00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4086 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3676 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.016 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.032 Fetching changes from the remote Git repository 00:00:00.035 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.046 Using shallow fetch with depth 1 00:00:00.046 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.046 > git --version # timeout=10 00:00:00.060 > git --version # 'git version 2.39.2' 00:00:00.061 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.079 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.079 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.271 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.282 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.292 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.292 > git config core.sparsecheckout # timeout=10 00:00:02.302 > git read-tree -mu HEAD # timeout=10 00:00:02.316 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 
00:00:02.343 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.343 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.610 [Pipeline] Start of Pipeline 00:00:02.626 [Pipeline] library 00:00:02.628 Loading library shm_lib@master 00:00:02.628 Library shm_lib@master is cached. Copying from home. 00:00:02.647 [Pipeline] node 00:00:02.658 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.659 [Pipeline] { 00:00:02.671 [Pipeline] catchError 00:00:02.673 [Pipeline] { 00:00:02.685 [Pipeline] wrap 00:00:02.693 [Pipeline] { 00:00:02.700 [Pipeline] stage 00:00:02.702 [Pipeline] { (Prologue) 00:00:02.719 [Pipeline] echo 00:00:02.721 Node: VM-host-WFP7 00:00:02.727 [Pipeline] cleanWs 00:00:02.741 [WS-CLEANUP] Deleting project workspace... 00:00:02.741 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.748 [WS-CLEANUP] done 00:00:02.964 [Pipeline] setCustomBuildProperty 00:00:03.043 [Pipeline] httpRequest 00:00:03.368 [Pipeline] echo 00:00:03.370 Sorcerer 10.211.164.20 is alive 00:00:03.382 [Pipeline] retry 00:00:03.385 [Pipeline] { 00:00:03.402 [Pipeline] httpRequest 00:00:03.407 HttpMethod: GET 00:00:03.408 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.408 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.409 Response Code: HTTP/1.1 200 OK 00:00:03.409 Success: Status code 200 is in the accepted range: 200,404 00:00:03.410 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.555 [Pipeline] } 00:00:03.566 [Pipeline] // retry 00:00:03.573 [Pipeline] sh 00:00:03.858 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.874 [Pipeline] httpRequest 00:00:04.274 [Pipeline] echo 00:00:04.277 Sorcerer 10.211.164.20 is alive 00:00:04.286 [Pipeline] retry 00:00:04.288 
[Pipeline] { 00:00:04.305 [Pipeline] httpRequest 00:00:04.309 HttpMethod: GET 00:00:04.309 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:04.310 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:04.311 Response Code: HTTP/1.1 200 OK 00:00:04.311 Success: Status code 200 is in the accepted range: 200,404 00:00:04.311 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:38.784 [Pipeline] } 00:01:38.801 [Pipeline] // retry 00:01:38.808 [Pipeline] sh 00:01:39.092 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:41.642 [Pipeline] sh 00:01:41.927 + git -C spdk log --oneline -n5 00:01:41.927 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:41.927 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:41.927 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:41.927 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:01:41.927 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:41.948 [Pipeline] withCredentials 00:01:41.961 > git --version # timeout=10 00:01:41.975 > git --version # 'git version 2.39.2' 00:01:41.993 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:41.996 [Pipeline] { 00:01:42.006 [Pipeline] retry 00:01:42.008 [Pipeline] { 00:01:42.024 [Pipeline] sh 00:01:42.308 + git ls-remote http://dpdk.org/git/dpdk main 00:01:42.580 [Pipeline] } 00:01:42.599 [Pipeline] // retry 00:01:42.604 [Pipeline] } 00:01:42.619 [Pipeline] // withCredentials 00:01:42.628 [Pipeline] httpRequest 00:01:43.015 [Pipeline] echo 00:01:43.017 Sorcerer 10.211.164.20 is alive 00:01:43.027 [Pipeline] retry 00:01:43.029 [Pipeline] { 00:01:43.043 [Pipeline] httpRequest 00:01:43.048 HttpMethod: GET 
00:01:43.049 URL: http://10.211.164.20/packages/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz 00:01:43.049 Sending request to url: http://10.211.164.20/packages/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz 00:01:43.057 Response Code: HTTP/1.1 200 OK 00:01:43.058 Success: Status code 200 is in the accepted range: 200,404 00:01:43.058 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz 00:01:49.915 [Pipeline] } 00:01:49.933 [Pipeline] // retry 00:01:49.941 [Pipeline] sh 00:01:50.226 + tar --no-same-owner -xf dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz 00:01:51.622 [Pipeline] sh 00:01:51.907 + git -C dpdk log --oneline -n5 00:01:51.907 4843aacb0d doc: describe send scheduling counters in mlx5 guide 00:01:51.907 a4f455560f version: 24.11-rc4 00:01:51.907 0c81db5870 dts: remove leftover node methods 00:01:51.907 71eae7fe3e doc: correct definition of stats per queue feature 00:01:51.907 f2b1510f19 net/octeon_ep: replace use of word segregate 00:01:51.926 [Pipeline] writeFile 00:01:51.942 [Pipeline] sh 00:01:52.228 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:52.242 [Pipeline] sh 00:01:52.527 + cat autorun-spdk.conf 00:01:52.527 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.527 SPDK_RUN_ASAN=1 00:01:52.527 SPDK_RUN_UBSAN=1 00:01:52.527 SPDK_TEST_RAID=1 00:01:52.527 SPDK_TEST_NATIVE_DPDK=main 00:01:52.527 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:52.527 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.536 RUN_NIGHTLY=1 00:01:52.539 [Pipeline] } 00:01:52.552 [Pipeline] // stage 00:01:52.567 [Pipeline] stage 00:01:52.569 [Pipeline] { (Run VM) 00:01:52.582 [Pipeline] sh 00:01:52.907 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:52.907 + echo 'Start stage prepare_nvme.sh' 00:01:52.907 Start stage prepare_nvme.sh 00:01:52.907 + [[ -n 5 ]] 00:01:52.907 + disk_prefix=ex5 00:01:52.907 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 
00:01:52.907 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:52.907 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:52.907 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.907 ++ SPDK_RUN_ASAN=1 00:01:52.907 ++ SPDK_RUN_UBSAN=1 00:01:52.907 ++ SPDK_TEST_RAID=1 00:01:52.907 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:52.907 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:52.907 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.907 ++ RUN_NIGHTLY=1 00:01:52.907 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:52.907 + nvme_files=() 00:01:52.907 + declare -A nvme_files 00:01:52.907 + backend_dir=/var/lib/libvirt/images/backends 00:01:52.907 + nvme_files['nvme.img']=5G 00:01:52.907 + nvme_files['nvme-cmb.img']=5G 00:01:52.907 + nvme_files['nvme-multi0.img']=4G 00:01:52.907 + nvme_files['nvme-multi1.img']=4G 00:01:52.907 + nvme_files['nvme-multi2.img']=4G 00:01:52.907 + nvme_files['nvme-openstack.img']=8G 00:01:52.907 + nvme_files['nvme-zns.img']=5G 00:01:52.907 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:52.907 + (( SPDK_TEST_FTL == 1 )) 00:01:52.907 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:52.907 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:52.907 + for nvme in "${!nvme_files[@]}" 00:01:52.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:52.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:52.907 + for nvme in "${!nvme_files[@]}" 00:01:52.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:52.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:52.907 + for nvme in "${!nvme_files[@]}" 00:01:52.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:52.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:52.907 + for nvme in "${!nvme_files[@]}" 00:01:52.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:52.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:52.907 + for nvme in "${!nvme_files[@]}" 00:01:52.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:52.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:52.907 + for nvme in "${!nvme_files[@]}" 00:01:52.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:52.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:52.907 + for nvme in "${!nvme_files[@]}" 00:01:52.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:52.907 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:52.907 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:53.169 + echo 'End stage prepare_nvme.sh' 00:01:53.169 End stage prepare_nvme.sh 00:01:53.182 [Pipeline] sh 00:01:53.468 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:53.468 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:53.468 00:01:53.468 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:53.468 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:53.468 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:53.468 HELP=0 00:01:53.468 DRY_RUN=0 00:01:53.468 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:53.468 NVME_DISKS_TYPE=nvme,nvme, 00:01:53.468 NVME_AUTO_CREATE=0 00:01:53.468 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:53.468 NVME_CMB=,, 00:01:53.468 NVME_PMR=,, 00:01:53.468 NVME_ZNS=,, 00:01:53.468 NVME_MS=,, 00:01:53.468 NVME_FDP=,, 00:01:53.468 SPDK_VAGRANT_DISTRO=fedora39 00:01:53.468 SPDK_VAGRANT_VMCPU=10 00:01:53.468 SPDK_VAGRANT_VMRAM=12288 00:01:53.468 SPDK_VAGRANT_PROVIDER=libvirt 00:01:53.468 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:53.468 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:53.468 SPDK_OPENSTACK_NETWORK=0 00:01:53.468 VAGRANT_PACKAGE_BOX=0 00:01:53.468 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:53.468 
FORCE_DISTRO=true 00:01:53.468 VAGRANT_BOX_VERSION= 00:01:53.468 EXTRA_VAGRANTFILES= 00:01:53.468 NIC_MODEL=virtio 00:01:53.468 00:01:53.468 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:53.468 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:55.378 Bringing machine 'default' up with 'libvirt' provider... 00:01:55.956 ==> default: Creating image (snapshot of base box volume). 00:01:55.956 ==> default: Creating domain with the following settings... 00:01:55.956 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732819345_79f8b767f5900a682559 00:01:55.956 ==> default: -- Domain type: kvm 00:01:55.956 ==> default: -- Cpus: 10 00:01:55.956 ==> default: -- Feature: acpi 00:01:55.956 ==> default: -- Feature: apic 00:01:55.956 ==> default: -- Feature: pae 00:01:55.956 ==> default: -- Memory: 12288M 00:01:55.956 ==> default: -- Memory Backing: hugepages: 00:01:55.956 ==> default: -- Management MAC: 00:01:55.956 ==> default: -- Loader: 00:01:55.956 ==> default: -- Nvram: 00:01:55.956 ==> default: -- Base box: spdk/fedora39 00:01:55.956 ==> default: -- Storage pool: default 00:01:55.956 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732819345_79f8b767f5900a682559.img (20G) 00:01:55.956 ==> default: -- Volume Cache: default 00:01:55.956 ==> default: -- Kernel: 00:01:55.956 ==> default: -- Initrd: 00:01:55.956 ==> default: -- Graphics Type: vnc 00:01:55.956 ==> default: -- Graphics Port: -1 00:01:55.956 ==> default: -- Graphics IP: 127.0.0.1 00:01:55.956 ==> default: -- Graphics Password: Not defined 00:01:55.956 ==> default: -- Video Type: cirrus 00:01:55.956 ==> default: -- Video VRAM: 9216 00:01:55.956 ==> default: -- Sound Type: 00:01:55.956 ==> default: -- Keymap: en-us 00:01:55.956 ==> default: -- TPM Path: 00:01:55.956 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:55.956 ==> default: -- Command line args: 00:01:55.956 
==> default: -> value=-device, 00:01:55.956 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:55.956 ==> default: -> value=-drive, 00:01:55.956 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:55.956 ==> default: -> value=-device, 00:01:55.956 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:55.956 ==> default: -> value=-device, 00:01:55.956 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:55.956 ==> default: -> value=-drive, 00:01:55.956 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:55.956 ==> default: -> value=-device, 00:01:55.956 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:55.956 ==> default: -> value=-drive, 00:01:55.956 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:55.956 ==> default: -> value=-device, 00:01:55.956 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:55.956 ==> default: -> value=-drive, 00:01:55.956 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:55.956 ==> default: -> value=-device, 00:01:55.956 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:56.215 ==> default: Creating shared folders metadata... 00:01:56.215 ==> default: Starting domain. 00:01:58.126 ==> default: Waiting for domain to get an IP address... 00:02:13.052 ==> default: Waiting for SSH to become available... 00:02:14.431 ==> default: Configuring and enabling network interfaces... 
00:02:21.023 default: SSH address: 192.168.121.212:22 00:02:21.023 default: SSH username: vagrant 00:02:21.023 default: SSH auth method: private key 00:02:23.582 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:31.735 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:37.006 ==> default: Mounting SSHFS shared folder... 00:02:39.543 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:39.543 ==> default: Checking Mount.. 00:02:40.939 ==> default: Folder Successfully Mounted! 00:02:40.939 ==> default: Running provisioner: file... 00:02:41.878 default: ~/.gitconfig => .gitconfig 00:02:42.448 00:02:42.448 SUCCESS! 00:02:42.448 00:02:42.448 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:42.448 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:42.448 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:42.448 00:02:42.458 [Pipeline] } 00:02:42.474 [Pipeline] // stage 00:02:42.484 [Pipeline] dir 00:02:42.485 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:42.487 [Pipeline] { 00:02:42.499 [Pipeline] catchError 00:02:42.501 [Pipeline] { 00:02:42.514 [Pipeline] sh 00:02:42.798 + vagrant ssh-config --host vagrant 00:02:42.798 + sed -ne /^Host/,$p 00:02:42.798 + tee ssh_conf 00:02:45.336 Host vagrant 00:02:45.337 HostName 192.168.121.212 00:02:45.337 User vagrant 00:02:45.337 Port 22 00:02:45.337 UserKnownHostsFile /dev/null 00:02:45.337 StrictHostKeyChecking no 00:02:45.337 PasswordAuthentication no 00:02:45.337 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:45.337 IdentitiesOnly yes 00:02:45.337 LogLevel FATAL 00:02:45.337 ForwardAgent yes 00:02:45.337 ForwardX11 yes 00:02:45.337 00:02:45.351 [Pipeline] withEnv 00:02:45.353 [Pipeline] { 00:02:45.366 [Pipeline] sh 00:02:45.664 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:45.665 source /etc/os-release 00:02:45.665 [[ -e /image.version ]] && img=$(< /image.version) 00:02:45.665 # Minimal, systemd-like check. 00:02:45.665 if [[ -e /.dockerenv ]]; then 00:02:45.665 # Clear garbage from the node's name: 00:02:45.665 # agt-er_autotest_547-896 -> autotest_547-896 00:02:45.665 # $HOSTNAME is the actual container id 00:02:45.665 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:45.665 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:45.665 # We can assume this is a mount from a host where container is running, 00:02:45.665 # so fetch its hostname to easily identify the target swarm worker. 
00:02:45.665 container="$(< /etc/hostname) ($agent)" 00:02:45.665 else 00:02:45.665 # Fallback 00:02:45.665 container=$agent 00:02:45.665 fi 00:02:45.665 fi 00:02:45.665 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:45.665 00:02:45.946 [Pipeline] } 00:02:45.965 [Pipeline] // withEnv 00:02:45.973 [Pipeline] setCustomBuildProperty 00:02:45.986 [Pipeline] stage 00:02:45.988 [Pipeline] { (Tests) 00:02:46.003 [Pipeline] sh 00:02:46.288 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:46.563 [Pipeline] sh 00:02:46.847 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:47.123 [Pipeline] timeout 00:02:47.124 Timeout set to expire in 1 hr 30 min 00:02:47.126 [Pipeline] { 00:02:47.140 [Pipeline] sh 00:02:47.424 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:47.994 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:02:48.008 [Pipeline] sh 00:02:48.297 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:48.572 [Pipeline] sh 00:02:48.870 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:49.158 [Pipeline] sh 00:02:49.439 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:49.699 ++ readlink -f spdk_repo 00:02:49.699 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:49.699 + [[ -n /home/vagrant/spdk_repo ]] 00:02:49.699 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:49.699 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:49.699 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:49.699 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:49.699 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:49.699 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:49.699 + cd /home/vagrant/spdk_repo 00:02:49.699 + source /etc/os-release 00:02:49.699 ++ NAME='Fedora Linux' 00:02:49.699 ++ VERSION='39 (Cloud Edition)' 00:02:49.699 ++ ID=fedora 00:02:49.699 ++ VERSION_ID=39 00:02:49.699 ++ VERSION_CODENAME= 00:02:49.699 ++ PLATFORM_ID=platform:f39 00:02:49.699 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:49.699 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:49.699 ++ LOGO=fedora-logo-icon 00:02:49.699 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:49.699 ++ HOME_URL=https://fedoraproject.org/ 00:02:49.699 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:49.699 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:49.699 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:49.699 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:49.699 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:49.699 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:49.699 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:49.699 ++ SUPPORT_END=2024-11-12 00:02:49.699 ++ VARIANT='Cloud Edition' 00:02:49.699 ++ VARIANT_ID=cloud 00:02:49.699 + uname -a 00:02:49.699 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:49.699 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:50.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:50.268 Hugepages 00:02:50.268 node hugesize free / total 00:02:50.268 node0 1048576kB 0 / 0 00:02:50.268 node0 2048kB 0 / 0 00:02:50.268 00:02:50.268 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:50.268 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:50.268 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:50.268 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:50.268 + rm -f /tmp/spdk-ld-path 00:02:50.268 + source autorun-spdk.conf 00:02:50.268 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:50.268 ++ SPDK_RUN_ASAN=1 00:02:50.268 ++ SPDK_RUN_UBSAN=1 00:02:50.268 ++ SPDK_TEST_RAID=1 00:02:50.268 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:50.268 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:50.268 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:50.268 ++ RUN_NIGHTLY=1 00:02:50.268 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:50.268 + [[ -n '' ]] 00:02:50.268 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:50.268 + for M in /var/spdk/build-*-manifest.txt 00:02:50.268 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:50.268 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:50.268 + for M in /var/spdk/build-*-manifest.txt 00:02:50.268 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:50.268 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:50.268 + for M in /var/spdk/build-*-manifest.txt 00:02:50.268 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:50.268 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:50.268 ++ uname 00:02:50.527 + [[ Linux == \L\i\n\u\x ]] 00:02:50.527 + sudo dmesg -T 00:02:50.528 + sudo dmesg --clear 00:02:50.528 + dmesg_pid=6161 00:02:50.528 + [[ Fedora Linux == FreeBSD ]] 00:02:50.528 + sudo dmesg -Tw 00:02:50.528 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:50.528 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:50.528 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:50.528 + [[ -x /usr/src/fio-static/fio ]] 00:02:50.528 + export FIO_BIN=/usr/src/fio-static/fio 00:02:50.528 + FIO_BIN=/usr/src/fio-static/fio 00:02:50.528 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:50.528 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:50.528 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:50.528 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:50.528 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:50.528 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:50.528 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:50.528 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:50.528 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:50.528 18:43:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:50.528 18:43:20 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:50.528 18:43:20 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:50.528 18:43:20 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:50.528 18:43:20 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:50.528 18:43:20 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:50.528 18:43:20 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=main 00:02:50.528 18:43:20 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:50.528 18:43:20 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:50.528 18:43:20 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1 00:02:50.528 18:43:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:50.528 18:43:20 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:50.788 18:43:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:50.788 18:43:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:50.788 18:43:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:50.788 18:43:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:50.788 18:43:20 -- 
scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.788 18:43:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.788 18:43:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.788 18:43:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.788 18:43:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.788 18:43:20 -- paths/export.sh@5 -- $ export PATH 00:02:50.788 18:43:20 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.788 18:43:20 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:50.788 18:43:20 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:50.788 18:43:20 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732819400.XXXXXX 00:02:50.788 18:43:20 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732819400.emz71Q 00:02:50.788 18:43:20 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:50.788 18:43:20 -- common/autobuild_common.sh@499 -- $ '[' -n main ']' 00:02:50.788 18:43:20 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:50.788 18:43:20 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:50.788 18:43:20 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:50.788 18:43:20 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:50.788 18:43:20 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:50.788 18:43:20 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:50.788 18:43:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.788 18:43:20 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:50.788 18:43:20 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:50.788 18:43:20 -- pm/common@17 -- $ local monitor 00:02:50.788 18:43:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.788 18:43:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.788 18:43:20 -- pm/common@25 -- $ sleep 1 00:02:50.788 18:43:20 -- pm/common@21 -- $ date +%s 00:02:50.788 18:43:20 -- pm/common@21 -- $ date +%s 00:02:50.788 18:43:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732819400 00:02:50.788 18:43:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732819400 00:02:50.788 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732819400_collect-vmstat.pm.log 00:02:50.788 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732819400_collect-cpu-load.pm.log 00:02:51.728 18:43:21 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:51.728 18:43:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:51.728 18:43:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:51.728 18:43:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:51.728 18:43:21 -- spdk/autobuild.sh@16 -- $ date -u 00:02:51.728 Thu Nov 28 06:43:21 PM UTC 2024 00:02:51.728 18:43:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:51.728 v25.01-pre-276-g35cd3e84d 00:02:51.728 18:43:21 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:51.728 18:43:21 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:51.728 18:43:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 
00:02:51.728 18:43:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:51.728 18:43:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.728 ************************************ 00:02:51.728 START TEST asan 00:02:51.728 ************************************ 00:02:51.728 using asan 00:02:51.728 18:43:21 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:51.728 00:02:51.728 real 0m0.000s 00:02:51.728 user 0m0.000s 00:02:51.728 sys 0m0.000s 00:02:51.728 18:43:21 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:51.728 18:43:21 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:51.728 ************************************ 00:02:51.728 END TEST asan 00:02:51.728 ************************************ 00:02:51.728 18:43:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:51.728 18:43:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:51.728 18:43:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:51.728 18:43:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:51.728 18:43:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.728 ************************************ 00:02:51.728 START TEST ubsan 00:02:51.728 ************************************ 00:02:51.728 using ubsan 00:02:51.728 18:43:21 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:51.728 00:02:51.728 real 0m0.000s 00:02:51.728 user 0m0.000s 00:02:51.728 sys 0m0.000s 00:02:51.728 18:43:21 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:51.728 18:43:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:51.728 ************************************ 00:02:51.728 END TEST ubsan 00:02:51.728 ************************************ 00:02:51.989 18:43:21 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:51.989 18:43:21 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:51.989 18:43:21 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:51.989 
18:43:21 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:51.989 18:43:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:51.989 18:43:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.989 ************************************ 00:02:51.989 START TEST build_native_dpdk 00:02:51.989 ************************************ 00:02:51.989 18:43:21 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@71 -- 
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:51.989 4843aacb0d doc: describe send scheduling counters in mlx5 guide 00:02:51.989 a4f455560f version: 24.11-rc4 00:02:51.989 0c81db5870 dts: remove leftover node methods 00:02:51.989 71eae7fe3e doc: correct definition of stats per queue feature 00:02:51.989 f2b1510f19 net/octeon_ep: replace use of word segregate 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc4 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" 
"power/intel_uncore" "power/kvm_vm") 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc4 21.11.0 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 21.11.0 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 
)) 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:51.989 18:43:21 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:51.989 18:43:21 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:51.989 patching file config/rte_config.h 00:02:51.990 Hunk #1 succeeded at 72 (offset 13 lines). 
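The trace above steps through SPDK's version-comparison helper deciding whether the checked-out DPDK (24.11.0-rc4) predates 21.11.0: the version strings are split on `.-:` into arrays, then compared component by component, with numeric components normalized in base 10 (so `07` becomes `7`, as seen further down in the log). A minimal standalone sketch of that comparison follows; the function name and structure here are illustrative, not the actual code in `scripts/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced in the log.
# Returns 0 if $1 < $2, non-zero otherwise.
version_lt() {
    local IFS=.-:            # split on the same separators as the trace
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b max=${#v1[@]}
    (( ${#v2[@]} > max )) && max=${#v2[@]}
    for (( i = 0; i < max; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        # Non-numeric components (e.g. "rc4") are treated as 0 here;
        # this is a simplification of the real helper's handling.
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        # 10#$a forces base-10, so "07" compares as 7 (as in the trace)
        (( 10#$a > 10#$b )) && return 1
        (( 10#$a < 10#$b )) && return 0
    done
    return 1                 # equal components: not less-than
}

version_lt 24.11.0-rc4 21.11.0 && echo lt || echo "not lt"   # prints "not lt"
```

This mirrors why the log's first `lt 24.11.0-rc4 21.11.0` check returns 1 (24 > 21 on the first component) and the later `ge 24.11.0-rc4 24.07.0` check succeeds, selecting the newer patch path.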
00:02:51.990 18:43:21 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc4 24.07.0 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 24.07.0 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:51.990 18:43:21 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc4 24.07.0 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc4 '>=' 24.07.0 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:51.990 18:43:21 
build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:51.990 18:43:21 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:51.990 18:43:21 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1 00:02:51.990 patching file drivers/bus/pci/linux/pci_uio.c 00:02:51.990 18:43:21 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:51.990 18:43:21 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:51.990 18:43:21 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:51.990 18:43:21 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:51.990 18:43:21 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native 
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:58.569 The Meson build system 00:02:58.569 Version: 1.5.0 00:02:58.569 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:58.569 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:58.569 Build type: native build 00:02:58.569 Project name: DPDK 00:02:58.569 Project version: 24.11.0-rc4 00:02:58.569 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:58.569 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:58.569 Host machine cpu family: x86_64 00:02:58.569 Host machine cpu: x86_64 00:02:58.569 Message: ## Building in Developer Mode ## 00:02:58.569 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:58.569 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:58.569 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:58.569 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:58.569 Program cat found: YES (/usr/bin/cat) 00:02:58.569 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:58.569 Compiler for C supports arguments -march=native: YES 00:02:58.569 Checking for size of "void *" : 8 00:02:58.569 Checking for size of "void *" : 8 (cached) 00:02:58.569 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:58.569 Library m found: YES 00:02:58.569 Library numa found: YES 00:02:58.569 Has header "numaif.h" : YES 00:02:58.569 Library fdt found: NO 00:02:58.569 Library execinfo found: NO 00:02:58.569 Has header "execinfo.h" : YES 00:02:58.569 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:58.569 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:58.569 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:58.569 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:58.569 Run-time dependency openssl found: YES 3.1.1 00:02:58.569 Run-time dependency libpcap found: YES 1.10.4 00:02:58.569 Has header "pcap.h" with dependency libpcap: YES 00:02:58.569 Compiler for C supports arguments -Wcast-qual: YES 00:02:58.569 Compiler for C supports arguments -Wdeprecated: YES 00:02:58.569 Compiler for C supports arguments -Wformat: YES 00:02:58.569 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:58.569 Compiler for C supports arguments -Wformat-security: NO 00:02:58.569 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:58.569 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:58.569 Compiler for C supports arguments -Wnested-externs: YES 00:02:58.569 Compiler for C supports arguments -Wold-style-definition: YES 00:02:58.569 Compiler for C supports arguments -Wpointer-arith: YES 00:02:58.569 Compiler for C supports arguments -Wsign-compare: YES 00:02:58.569 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:58.569 Compiler for C supports arguments -Wundef: YES 00:02:58.569 Compiler for C supports arguments -Wwrite-strings: YES 00:02:58.569 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:58.569 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:58.569 Program objdump found: YES (/usr/bin/objdump) 00:02:58.569 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:02:58.569 Checking if "AVX512 checking" compiles: YES 00:02:58.569 Fetching value of define "__AVX512F__" : 1 00:02:58.569 Fetching value of define "__AVX512BW__" : 1 00:02:58.569 Fetching value of define "__AVX512DQ__" : 1 00:02:58.569 Fetching value of define "__AVX512VL__" : 1 00:02:58.569 Fetching value of define "__SSE4_2__" : 1 00:02:58.569 Fetching value of define "__AES__" : 1 00:02:58.569 Fetching value of define "__AVX__" : 1 00:02:58.569 Fetching value of define "__AVX2__" : 1 00:02:58.569 Fetching value of define "__AVX512BW__" : 1 00:02:58.569 Fetching value of define "__AVX512CD__" : 1 00:02:58.569 Fetching value of define "__AVX512DQ__" : 1 00:02:58.569 Fetching value of define "__AVX512F__" : 1 00:02:58.569 Fetching value of define "__AVX512VL__" : 1 00:02:58.569 Fetching value of define "__PCLMUL__" : 1 00:02:58.569 Fetching value of define "__RDRND__" : 1 00:02:58.569 Fetching value of define "__RDSEED__" : 1 00:02:58.569 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:58.569 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:58.569 Message: lib/log: Defining dependency "log" 00:02:58.569 Message: lib/kvargs: Defining dependency "kvargs" 00:02:58.569 Message: lib/argparse: Defining dependency "argparse" 00:02:58.569 Message: lib/telemetry: Defining dependency "telemetry" 00:02:58.569 Checking for function "pthread_attr_setaffinity_np" : YES 00:02:58.569 Checking for function "getentropy" : NO 00:02:58.569 Message: lib/eal: Defining dependency "eal" 00:02:58.569 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:58.570 Message: lib/ring: Defining dependency "ring" 00:02:58.570 Message: lib/rcu: Defining dependency "rcu" 00:02:58.570 Message: lib/mempool: Defining dependency "mempool" 
00:02:58.570 Message: lib/mbuf: Defining dependency "mbuf" 00:02:58.570 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:58.570 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:58.570 Compiler for C supports arguments -mpclmul: YES 00:02:58.570 Compiler for C supports arguments -maes: YES 00:02:58.570 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:58.570 Message: lib/net: Defining dependency "net" 00:02:58.570 Message: lib/meter: Defining dependency "meter" 00:02:58.570 Message: lib/ethdev: Defining dependency "ethdev" 00:02:58.570 Message: lib/pci: Defining dependency "pci" 00:02:58.570 Message: lib/cmdline: Defining dependency "cmdline" 00:02:58.570 Message: lib/metrics: Defining dependency "metrics" 00:02:58.570 Message: lib/hash: Defining dependency "hash" 00:02:58.570 Message: lib/timer: Defining dependency "timer" 00:02:58.570 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:58.570 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:58.570 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:58.570 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:58.570 Message: lib/acl: Defining dependency "acl" 00:02:58.570 Message: lib/bbdev: Defining dependency "bbdev" 00:02:58.570 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:58.570 Run-time dependency libelf found: YES 0.191 00:02:58.570 Message: lib/bpf: Defining dependency "bpf" 00:02:58.570 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:58.570 Message: lib/compressdev: Defining dependency "compressdev" 00:02:58.570 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:58.570 Message: lib/distributor: Defining dependency "distributor" 00:02:58.570 Message: lib/dmadev: Defining dependency "dmadev" 00:02:58.570 Message: lib/efd: Defining dependency "efd" 00:02:58.570 Message: lib/eventdev: Defining dependency "eventdev" 00:02:58.570 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:58.570 
Message: lib/gpudev: Defining dependency "gpudev" 00:02:58.570 Message: lib/gro: Defining dependency "gro" 00:02:58.570 Message: lib/gso: Defining dependency "gso" 00:02:58.570 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:58.570 Message: lib/jobstats: Defining dependency "jobstats" 00:02:58.570 Message: lib/latencystats: Defining dependency "latencystats" 00:02:58.570 Message: lib/lpm: Defining dependency "lpm" 00:02:58.570 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:58.570 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:58.570 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:58.570 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:58.570 Message: lib/member: Defining dependency "member" 00:02:58.570 Message: lib/pcapng: Defining dependency "pcapng" 00:02:58.570 Message: lib/power: Defining dependency "power" 00:02:58.570 Message: lib/rawdev: Defining dependency "rawdev" 00:02:58.570 Message: lib/regexdev: Defining dependency "regexdev" 00:02:58.570 Message: lib/mldev: Defining dependency "mldev" 00:02:58.570 Message: lib/rib: Defining dependency "rib" 00:02:58.570 Message: lib/reorder: Defining dependency "reorder" 00:02:58.570 Message: lib/sched: Defining dependency "sched" 00:02:58.570 Message: lib/security: Defining dependency "security" 00:02:58.570 Message: lib/stack: Defining dependency "stack" 00:02:58.570 Has header "linux/userfaultfd.h" : YES 00:02:58.570 Has header "linux/vduse.h" : YES 00:02:58.570 Message: lib/vhost: Defining dependency "vhost" 00:02:58.570 Message: lib/ipsec: Defining dependency "ipsec" 00:02:58.570 Message: lib/pdcp: Defining dependency "pdcp" 00:02:58.570 Message: lib/fib: Defining dependency "fib" 00:02:58.570 Message: lib/port: Defining dependency "port" 00:02:58.570 Message: lib/pdump: Defining dependency "pdump" 00:02:58.570 Message: lib/table: Defining dependency "table" 00:02:58.570 Message: lib/pipeline: Defining dependency "pipeline" 00:02:58.570 
Message: lib/graph: Defining dependency "graph" 00:02:58.570 Message: lib/node: Defining dependency "node" 00:02:58.570 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:58.570 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:58.570 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:58.570 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:58.570 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:58.570 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:58.570 Compiler for C supports arguments -Wno-unused-value: YES 00:02:58.570 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:58.570 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:58.570 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:58.570 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:58.570 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:58.570 Message: drivers/power/acpi: Defining dependency "power_acpi" 00:02:58.570 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate" 00:02:58.570 Message: drivers/power/cppc: Defining dependency "power_cppc" 00:02:58.570 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate" 00:02:58.570 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore" 00:02:58.570 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm" 00:02:58.570 Has header "sys/epoll.h" : YES 00:02:58.570 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:58.570 Configuring doxy-api-html.conf using configuration 00:02:58.570 Configuring doxy-api-man.conf using configuration 00:02:58.570 Program mandb found: YES (/usr/bin/mandb) 00:02:58.570 Program sphinx-build found: NO 00:02:58.570 Program sphinx-build found: NO 00:02:58.570 Configuring rte_build_config.h using configuration 00:02:58.570 Message: 00:02:58.570 ================= 
00:02:58.570 Applications Enabled 00:02:58.570 ================= 00:02:58.570 00:02:58.570 apps: 00:02:58.570 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:58.570 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:58.570 test-pmd, test-regex, test-sad, test-security-perf, 00:02:58.570 00:02:58.570 Message: 00:02:58.570 ================= 00:02:58.570 Libraries Enabled 00:02:58.570 ================= 00:02:58.570 00:02:58.570 libs: 00:02:58.570 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:58.570 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:58.570 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:58.570 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:58.570 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:58.570 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:58.570 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:58.570 graph, node, 00:02:58.570 00:02:58.570 Message: 00:02:58.570 =============== 00:02:58.570 Drivers Enabled 00:02:58.570 =============== 00:02:58.570 00:02:58.570 common: 00:02:58.570 00:02:58.570 bus: 00:02:58.570 pci, vdev, 00:02:58.570 mempool: 00:02:58.570 ring, 00:02:58.570 dma: 00:02:58.570 00:02:58.570 net: 00:02:58.570 i40e, 00:02:58.570 raw: 00:02:58.570 00:02:58.570 crypto: 00:02:58.570 00:02:58.570 compress: 00:02:58.570 00:02:58.570 regex: 00:02:58.570 00:02:58.570 ml: 00:02:58.570 00:02:58.570 vdpa: 00:02:58.570 00:02:58.570 event: 00:02:58.570 00:02:58.570 baseband: 00:02:58.570 00:02:58.570 gpu: 00:02:58.570 00:02:58.570 power: 00:02:58.570 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:02:58.570 00:02:58.570 Message: 00:02:58.570 ================= 00:02:58.570 Content Skipped 00:02:58.570 ================= 00:02:58.571 00:02:58.571 apps: 00:02:58.571 
00:02:58.571 libs: 00:02:58.571 00:02:58.571 drivers: 00:02:58.571 common/cpt: not in enabled drivers build config 00:02:58.571 common/dpaax: not in enabled drivers build config 00:02:58.571 common/iavf: not in enabled drivers build config 00:02:58.571 common/idpf: not in enabled drivers build config 00:02:58.571 common/ionic: not in enabled drivers build config 00:02:58.571 common/mvep: not in enabled drivers build config 00:02:58.571 common/octeontx: not in enabled drivers build config 00:02:58.571 bus/auxiliary: not in enabled drivers build config 00:02:58.571 bus/cdx: not in enabled drivers build config 00:02:58.571 bus/dpaa: not in enabled drivers build config 00:02:58.571 bus/fslmc: not in enabled drivers build config 00:02:58.571 bus/ifpga: not in enabled drivers build config 00:02:58.571 bus/platform: not in enabled drivers build config 00:02:58.571 bus/uacce: not in enabled drivers build config 00:02:58.571 bus/vmbus: not in enabled drivers build config 00:02:58.571 common/cnxk: not in enabled drivers build config 00:02:58.571 common/mlx5: not in enabled drivers build config 00:02:58.571 common/nfp: not in enabled drivers build config 00:02:58.571 common/nitrox: not in enabled drivers build config 00:02:58.571 common/qat: not in enabled drivers build config 00:02:58.571 common/sfc_efx: not in enabled drivers build config 00:02:58.571 mempool/bucket: not in enabled drivers build config 00:02:58.571 mempool/cnxk: not in enabled drivers build config 00:02:58.571 mempool/dpaa: not in enabled drivers build config 00:02:58.571 mempool/dpaa2: not in enabled drivers build config 00:02:58.571 mempool/octeontx: not in enabled drivers build config 00:02:58.571 mempool/stack: not in enabled drivers build config 00:02:58.571 dma/cnxk: not in enabled drivers build config 00:02:58.571 dma/dpaa: not in enabled drivers build config 00:02:58.571 dma/dpaa2: not in enabled drivers build config 00:02:58.571 dma/hisilicon: not in enabled drivers build config 00:02:58.571 
dma/idxd: not in enabled drivers build config 00:02:58.571 dma/ioat: not in enabled drivers build config 00:02:58.571 dma/odm: not in enabled drivers build config 00:02:58.571 dma/skeleton: not in enabled drivers build config 00:02:58.571 net/af_packet: not in enabled drivers build config 00:02:58.571 net/af_xdp: not in enabled drivers build config 00:02:58.571 net/ark: not in enabled drivers build config 00:02:58.571 net/atlantic: not in enabled drivers build config 00:02:58.571 net/avp: not in enabled drivers build config 00:02:58.571 net/axgbe: not in enabled drivers build config 00:02:58.571 net/bnx2x: not in enabled drivers build config 00:02:58.571 net/bnxt: not in enabled drivers build config 00:02:58.571 net/bonding: not in enabled drivers build config 00:02:58.571 net/cnxk: not in enabled drivers build config 00:02:58.571 net/cpfl: not in enabled drivers build config 00:02:58.571 net/cxgbe: not in enabled drivers build config 00:02:58.571 net/dpaa: not in enabled drivers build config 00:02:58.571 net/dpaa2: not in enabled drivers build config 00:02:58.571 net/e1000: not in enabled drivers build config 00:02:58.571 net/ena: not in enabled drivers build config 00:02:58.571 net/enetc: not in enabled drivers build config 00:02:58.571 net/enetfec: not in enabled drivers build config 00:02:58.571 net/enic: not in enabled drivers build config 00:02:58.571 net/failsafe: not in enabled drivers build config 00:02:58.571 net/fm10k: not in enabled drivers build config 00:02:58.571 net/gve: not in enabled drivers build config 00:02:58.571 net/hinic: not in enabled drivers build config 00:02:58.571 net/hns3: not in enabled drivers build config 00:02:58.571 net/iavf: not in enabled drivers build config 00:02:58.571 net/ice: not in enabled drivers build config 00:02:58.571 net/idpf: not in enabled drivers build config 00:02:58.571 net/igc: not in enabled drivers build config 00:02:58.571 net/ionic: not in enabled drivers build config 00:02:58.571 net/ipn3ke: not in 
enabled drivers build config 00:02:58.571 net/ixgbe: not in enabled drivers build config 00:02:58.571 net/mana: not in enabled drivers build config 00:02:58.571 net/memif: not in enabled drivers build config 00:02:58.571 net/mlx4: not in enabled drivers build config 00:02:58.571 net/mlx5: not in enabled drivers build config 00:02:58.571 net/mvneta: not in enabled drivers build config 00:02:58.571 net/mvpp2: not in enabled drivers build config 00:02:58.571 net/netvsc: not in enabled drivers build config 00:02:58.571 net/nfb: not in enabled drivers build config 00:02:58.571 net/nfp: not in enabled drivers build config 00:02:58.571 net/ngbe: not in enabled drivers build config 00:02:58.571 net/ntnic: not in enabled drivers build config 00:02:58.571 net/null: not in enabled drivers build config 00:02:58.571 net/octeontx: not in enabled drivers build config 00:02:58.571 net/octeon_ep: not in enabled drivers build config 00:02:58.571 net/pcap: not in enabled drivers build config 00:02:58.571 net/pfe: not in enabled drivers build config 00:02:58.571 net/qede: not in enabled drivers build config 00:02:58.571 net/r8169: not in enabled drivers build config 00:02:58.571 net/ring: not in enabled drivers build config 00:02:58.571 net/sfc: not in enabled drivers build config 00:02:58.571 net/softnic: not in enabled drivers build config 00:02:58.571 net/tap: not in enabled drivers build config 00:02:58.571 net/thunderx: not in enabled drivers build config 00:02:58.571 net/txgbe: not in enabled drivers build config 00:02:58.571 net/vdev_netvsc: not in enabled drivers build config 00:02:58.571 net/vhost: not in enabled drivers build config 00:02:58.571 net/virtio: not in enabled drivers build config 00:02:58.571 net/vmxnet3: not in enabled drivers build config 00:02:58.571 net/zxdh: not in enabled drivers build config 00:02:58.571 raw/cnxk_bphy: not in enabled drivers build config 00:02:58.571 raw/cnxk_gpio: not in enabled drivers build config 00:02:58.571 raw/cnxk_rvu_lf: not in 
enabled drivers build config 00:02:58.571 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:58.571 raw/gdtc: not in enabled drivers build config 00:02:58.571 raw/ifpga: not in enabled drivers build config 00:02:58.571 raw/ntb: not in enabled drivers build config 00:02:58.571 raw/skeleton: not in enabled drivers build config 00:02:58.571 crypto/armv8: not in enabled drivers build config 00:02:58.571 crypto/bcmfs: not in enabled drivers build config 00:02:58.571 crypto/caam_jr: not in enabled drivers build config 00:02:58.571 crypto/ccp: not in enabled drivers build config 00:02:58.571 crypto/cnxk: not in enabled drivers build config 00:02:58.571 crypto/dpaa_sec: not in enabled drivers build config 00:02:58.571 crypto/dpaa2_sec: not in enabled drivers build config 00:02:58.571 crypto/ionic: not in enabled drivers build config 00:02:58.571 crypto/ipsec_mb: not in enabled drivers build config 00:02:58.571 crypto/mlx5: not in enabled drivers build config 00:02:58.571 crypto/mvsam: not in enabled drivers build config 00:02:58.571 crypto/nitrox: not in enabled drivers build config 00:02:58.571 crypto/null: not in enabled drivers build config 00:02:58.571 crypto/octeontx: not in enabled drivers build config 00:02:58.571 crypto/openssl: not in enabled drivers build config 00:02:58.571 crypto/scheduler: not in enabled drivers build config 00:02:58.571 crypto/uadk: not in enabled drivers build config 00:02:58.571 crypto/virtio: not in enabled drivers build config 00:02:58.571 compress/isal: not in enabled drivers build config 00:02:58.571 compress/mlx5: not in enabled drivers build config 00:02:58.571 compress/nitrox: not in enabled drivers build config 00:02:58.571 compress/octeontx: not in enabled drivers build config 00:02:58.571 compress/uadk: not in enabled drivers build config 00:02:58.571 compress/zlib: not in enabled drivers build config 00:02:58.572 regex/mlx5: not in enabled drivers build config 00:02:58.572 regex/cn9k: not in enabled drivers build config 
00:02:58.572 ml/cnxk: not in enabled drivers build config 00:02:58.572 vdpa/ifc: not in enabled drivers build config 00:02:58.572 vdpa/mlx5: not in enabled drivers build config 00:02:58.572 vdpa/nfp: not in enabled drivers build config 00:02:58.572 vdpa/sfc: not in enabled drivers build config 00:02:58.572 event/cnxk: not in enabled drivers build config 00:02:58.572 event/dlb2: not in enabled drivers build config 00:02:58.572 event/dpaa: not in enabled drivers build config 00:02:58.572 event/dpaa2: not in enabled drivers build config 00:02:58.572 event/dsw: not in enabled drivers build config 00:02:58.572 event/opdl: not in enabled drivers build config 00:02:58.572 event/skeleton: not in enabled drivers build config 00:02:58.572 event/sw: not in enabled drivers build config 00:02:58.572 event/octeontx: not in enabled drivers build config 00:02:58.572 baseband/acc: not in enabled drivers build config 00:02:58.572 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:58.572 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:58.572 baseband/la12xx: not in enabled drivers build config 00:02:58.572 baseband/null: not in enabled drivers build config 00:02:58.572 baseband/turbo_sw: not in enabled drivers build config 00:02:58.572 gpu/cuda: not in enabled drivers build config 00:02:58.572 power/amd_uncore: not in enabled drivers build config 00:02:58.572 00:02:58.572 00:02:58.572 Message: DPDK build config complete: 00:02:58.572 source path = "/home/vagrant/spdk_repo/dpdk" 00:02:58.572 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp" 00:02:58.572 Build targets in project: 246 00:02:58.572 00:02:58.572 DPDK 24.11.0-rc4 00:02:58.572 00:02:58.572 User defined options 00:02:58.572 libdir : lib 00:02:58.572 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:58.572 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:58.572 c_link_args : 00:02:58.572 enable_docs : false 00:02:58.572 enable_drivers: 
bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:58.572 enable_kmods : false 00:02:59.141 machine : native 00:02:59.141 tests : false 00:02:59.141 00:02:59.141 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:59.141 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:59.141 18:43:28 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:59.141 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:59.400 [1/766] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:02:59.400 [2/766] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:02:59.400 [3/766] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:02:59.400 [4/766] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:02:59.401 [5/766] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:59.401 [6/766] Linking static target lib/librte_kvargs.a 00:02:59.401 [7/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:59.401 [8/766] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:59.401 [9/766] Linking static target lib/librte_log.a 00:02:59.401 [10/766] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:59.401 [11/766] Linking static target lib/librte_argparse.a 00:02:59.660 [12/766] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.660 [13/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:59.660 [14/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:59.660 [15/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:59.660 [16/766] Compiling C object 
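The warning above notes that invoking `meson [options]` without the explicit `setup` subcommand is deprecated. As a hedged sketch (not the job's actual script), the configuration recorded in the "User defined options" summary and the subsequent ninja invocation correspond roughly to the following, with every path and option value copied from this log:

```shell
# Hedged reconstruction of the build steps summarized in this log, written in
# the non-deprecated `meson setup` form that the warning recommends.
cd /home/vagrant/spdk_repo/dpdk

meson setup build-tmp \
    --libdir lib \
    --prefix /home/vagrant/spdk_repo/dpdk/build \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false

# Build with 10 parallel jobs, matching the autobuild command in the log.
ninja -C build-tmp -j10
```

This is a configuration sketch only; it assumes a DPDK checkout at the source path shown in the log and is not runnable outside that environment.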
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:59.660 [17/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:59.660 [18/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:59.660 [19/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:59.660 [20/766] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.919 [21/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:59.919 [22/766] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.919 [23/766] Linking target lib/librte_log.so.25.0 00:02:59.919 [24/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:59.919 [25/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:00.179 [26/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:03:00.179 [27/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:00.179 [28/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:00.179 [29/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:00.179 [30/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:00.179 [31/766] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:03:00.179 [32/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:00.179 [33/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:00.179 [34/766] Linking static target lib/librte_telemetry.a 00:03:00.179 [35/766] Linking target lib/librte_kvargs.so.25.0 00:03:00.179 [36/766] Linking target lib/librte_argparse.so.25.0 00:03:00.439 [37/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.439 [38/766] Generating symbol file 
lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:03:00.439 [39/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:00.439 [40/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:00.439 [41/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:00.439 [42/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:00.698 [43/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:00.698 [44/766] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.698 [45/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:00.698 [46/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:00.698 [47/766] Linking target lib/librte_telemetry.so.25.0 00:03:00.698 [48/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:00.698 [49/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:00.698 [50/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:00.698 [51/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:03:00.698 [52/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:00.698 [53/766] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:03:00.958 [54/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:00.958 [55/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:00.958 [56/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:00.958 [57/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:00.958 [58/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:01.218 [59/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:01.218 
[60/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:01.218 [61/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:01.218 [62/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:01.218 [63/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:01.218 [64/766] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:01.218 [65/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:01.477 [66/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:01.477 [67/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:01.477 [68/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:01.477 [69/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:01.477 [70/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:01.477 [71/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:01.477 [72/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:01.477 [73/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:01.477 [74/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:01.478 [75/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:01.737 [76/766] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:01.737 [77/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:01.737 [78/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:01.737 [79/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:01.998 [80/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:01.998 [81/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:01.998 [82/766] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:01.998 [83/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:01.998 [84/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:01.998 [85/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:01.998 [86/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:01.998 [87/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:01.998 [88/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:02.257 [89/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:02.257 [90/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:02.257 [91/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:03:02.257 [92/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:02.257 [93/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:02.258 [94/766] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:02.258 [95/766] Linking static target lib/librte_ring.a 00:03:02.517 [96/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:02.517 [97/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:02.517 [98/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:02.517 [99/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:02.517 [100/766] Linking static target lib/librte_eal.a 00:03:02.517 [101/766] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.517 [102/766] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:02.777 [103/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:02.777 [104/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:02.777 [105/766] Linking static target 
lib/librte_mempool.a 00:03:02.777 [106/766] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:02.777 [107/766] Linking static target lib/librte_rcu.a 00:03:03.037 [108/766] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:03.037 [109/766] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:03.037 [110/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:03.037 [111/766] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:03.037 [112/766] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:03.037 [113/766] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:03.037 [114/766] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:03.037 [115/766] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.297 [116/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:03.297 [117/766] Linking static target lib/librte_mbuf.a 00:03:03.297 [118/766] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:03.297 [119/766] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:03.297 [120/766] Linking static target lib/librte_meter.a 00:03:03.297 [121/766] Linking static target lib/librte_net.a 00:03:03.297 [122/766] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.557 [123/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:03.557 [124/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:03.557 [125/766] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.557 [126/766] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.557 [127/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:03.557 [128/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:03.818 [129/766] 
Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.818 [130/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:04.078 [131/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:04.337 [132/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:04.337 [133/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:04.337 [134/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:04.337 [135/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:04.337 [136/766] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:04.337 [137/766] Linking static target lib/librte_pci.a 00:03:04.337 [138/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:04.337 [139/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:04.337 [140/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:04.337 [141/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:04.597 [142/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:04.597 [143/766] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.597 [144/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:04.597 [145/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:04.597 [146/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:04.597 [147/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:04.597 [148/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:04.597 [149/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:04.597 [150/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:04.857 [151/766] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:04.857 [152/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:04.857 [153/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:04.857 [154/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:04.857 [155/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:04.857 [156/766] Linking static target lib/librte_cmdline.a 00:03:05.117 [157/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:05.117 [158/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:05.117 [159/766] Linking static target lib/librte_metrics.a 00:03:05.117 [160/766] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:05.117 [161/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:05.117 [162/766] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:05.377 [163/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:05.377 [164/766] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.636 [165/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:05.636 [166/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o 00:03:05.636 [167/766] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.636 [168/766] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:05.636 [169/766] Linking static target lib/librte_timer.a 00:03:05.895 [170/766] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:05.895 [171/766] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:05.895 [172/766] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:06.154 [173/766] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.154 
[174/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:06.413 [175/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:06.413 [176/766] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:06.413 [177/766] Linking static target lib/librte_bitratestats.a 00:03:06.687 [178/766] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.687 [179/766] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:06.687 [180/766] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:06.687 [181/766] Linking static target lib/librte_bbdev.a 00:03:06.974 [182/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:06.974 [183/766] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:06.974 [184/766] Linking static target lib/librte_hash.a 00:03:06.974 [185/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:06.974 [186/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:06.974 [187/766] Linking static target lib/librte_ethdev.a 00:03:06.974 [188/766] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.234 [189/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:07.234 [190/766] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:07.234 [191/766] Linking static target lib/acl/libavx2_tmp.a 00:03:07.494 [192/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:07.494 [193/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:07.494 [194/766] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.494 [195/766] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.494 [196/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:07.494 [197/766] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 
00:03:07.494 [198/766] Linking target lib/librte_eal.so.25.0 00:03:07.494 [199/766] Linking static target lib/librte_cfgfile.a 00:03:07.754 [200/766] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:03:07.754 [201/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:07.754 [202/766] Linking target lib/librte_ring.so.25.0 00:03:07.754 [203/766] Linking target lib/librte_meter.so.25.0 00:03:07.754 [204/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:07.754 [205/766] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:03:07.754 [206/766] Linking target lib/librte_pci.so.25.0 00:03:08.014 [207/766] Linking target lib/librte_rcu.so.25.0 00:03:08.014 [208/766] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:03:08.014 [209/766] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.014 [210/766] Linking target lib/librte_mempool.so.25.0 00:03:08.014 [211/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:08.014 [212/766] Linking target lib/librte_timer.so.25.0 00:03:08.014 [213/766] Linking target lib/librte_cfgfile.so.25.0 00:03:08.014 [214/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:08.014 [215/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:08.014 [216/766] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:03:08.014 [217/766] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:03:08.014 [218/766] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:03:08.014 [219/766] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:03:08.014 [220/766] Linking target lib/librte_mbuf.so.25.0 00:03:08.274 [221/766] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 
00:03:08.274 [222/766] Linking target lib/librte_net.so.25.0 00:03:08.274 [223/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:08.274 [224/766] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:03:08.274 [225/766] Linking target lib/librte_bbdev.so.25.0 00:03:08.274 [226/766] Linking target lib/librte_cmdline.so.25.0 00:03:08.274 [227/766] Linking static target lib/librte_bpf.a 00:03:08.274 [228/766] Linking target lib/librte_hash.so.25.0 00:03:08.274 [229/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:08.533 [230/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:08.533 [231/766] Linking static target lib/librte_compressdev.a 00:03:08.533 [232/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:08.533 [233/766] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:03:08.533 [234/766] Linking static target lib/librte_acl.a 00:03:08.533 [235/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:08.533 [236/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:08.533 [237/766] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.791 [238/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:08.791 [239/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:08.791 [240/766] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.791 [241/766] Linking static target lib/librte_distributor.a 00:03:08.791 [242/766] Linking target lib/librte_acl.so.25.0 00:03:08.791 [243/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:08.791 [244/766] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.791 
[245/766] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:03:08.791 [246/766] Linking target lib/librte_compressdev.so.25.0 00:03:09.049 [247/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:09.049 [248/766] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.049 [249/766] Linking target lib/librte_distributor.so.25.0 00:03:09.049 [250/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:09.049 [251/766] Linking static target lib/librte_dmadev.a 00:03:09.308 [252/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:09.308 [253/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:09.566 [254/766] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:09.567 [255/766] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.567 [256/766] Linking static target lib/librte_efd.a 00:03:09.567 [257/766] Linking target lib/librte_dmadev.so.25.0 00:03:09.825 [258/766] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:03:09.825 [259/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:09.825 [260/766] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.825 [261/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:09.825 [262/766] Linking target lib/librte_efd.so.25.0 00:03:09.825 [263/766] Linking static target lib/librte_cryptodev.a 00:03:09.825 [264/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:10.084 [265/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:10.342 [266/766] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:10.342 [267/766] Linking static target 
lib/librte_dispatcher.a 00:03:10.342 [268/766] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:10.342 [269/766] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:10.342 [270/766] Linking static target lib/librte_gpudev.a 00:03:10.342 [271/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:10.342 [272/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:10.599 [273/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:10.599 [274/766] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.599 [275/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:10.857 [276/766] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:10.857 [277/766] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.857 [278/766] Linking target lib/librte_cryptodev.so.25.0 00:03:10.857 [279/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:10.857 [280/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:10.857 [281/766] Linking static target lib/librte_gro.a 00:03:10.857 [282/766] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:11.116 [283/766] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:11.116 [284/766] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.116 [285/766] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:11.116 [286/766] Linking target lib/librte_gpudev.so.25.0 00:03:11.116 [287/766] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:11.116 [288/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:11.116 [289/766] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.116 [290/766] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:11.116 [291/766] Linking static target lib/librte_eventdev.a 00:03:11.374 [292/766] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:11.374 [293/766] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.374 [294/766] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:11.374 [295/766] Linking target lib/librte_ethdev.so.25.0 00:03:11.374 [296/766] Linking static target lib/librte_gso.a 00:03:11.374 [297/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:11.374 [298/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:11.374 [299/766] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:11.632 [300/766] Linking target lib/librte_metrics.so.25.0 00:03:11.632 [301/766] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.632 [302/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:11.632 [303/766] Linking target lib/librte_bpf.so.25.0 00:03:11.632 [304/766] Linking target lib/librte_gro.so.25.0 00:03:11.632 [305/766] Linking target lib/librte_gso.so.25.0 00:03:11.632 [306/766] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:11.632 [307/766] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:11.632 [308/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:11.632 [309/766] Linking static target lib/librte_jobstats.a 00:03:11.632 [310/766] Linking target lib/librte_bitratestats.so.25.0 00:03:11.632 [311/766] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:11.632 [312/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:11.632 [313/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 
00:03:11.632 [314/766] Linking static target lib/librte_ip_frag.a 00:03:11.891 [315/766] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.891 [316/766] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:11.891 [317/766] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.891 [318/766] Linking static target lib/librte_latencystats.a 00:03:11.891 [319/766] Linking target lib/librte_jobstats.so.25.0 00:03:11.891 [320/766] Linking target lib/librte_ip_frag.so.25.0 00:03:11.891 [321/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:12.150 [322/766] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:12.150 [323/766] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:12.150 [324/766] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:12.150 [325/766] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:12.150 [326/766] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.150 [327/766] Linking target lib/librte_latencystats.so.25.0 00:03:12.150 [328/766] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:12.150 [329/766] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o 00:03:12.409 [330/766] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o 00:03:12.409 [331/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:12.409 [332/766] Linking static target lib/librte_lpm.a 00:03:12.409 [333/766] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:12.409 [334/766] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:12.668 [335/766] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:12.668 [336/766] Linking static target lib/librte_power.a 00:03:12.668 
[337/766] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:12.668 [338/766] Linking static target lib/librte_pcapng.a 00:03:12.668 [339/766] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.668 [340/766] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:12.668 [341/766] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:12.668 [342/766] Linking static target lib/librte_rawdev.a 00:03:12.668 [343/766] Linking target lib/librte_lpm.so.25.0 00:03:12.668 [344/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:12.927 [345/766] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:12.927 [346/766] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:12.927 [347/766] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.927 [348/766] Linking static target lib/librte_regexdev.a 00:03:12.927 [349/766] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.927 [350/766] Linking target lib/librte_pcapng.so.25.0 00:03:12.927 [351/766] Linking target lib/librte_eventdev.so.25.0 00:03:12.927 [352/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:12.927 [353/766] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:12.927 [354/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:12.927 [355/766] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:13.186 [356/766] Linking target lib/librte_dispatcher.so.25.0 00:03:13.186 [357/766] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.186 [358/766] Linking target lib/librte_rawdev.so.25.0 00:03:13.186 [359/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:13.186 [360/766] 
Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:13.444 [361/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:13.444 [362/766] Linking static target lib/librte_mldev.a 00:03:13.445 [363/766] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.445 [364/766] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:13.445 [365/766] Linking static target lib/librte_member.a 00:03:13.445 [366/766] Linking target lib/librte_power.so.25.0 00:03:13.445 [367/766] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.445 [368/766] Linking target lib/librte_regexdev.so.25.0 00:03:13.445 [369/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:13.445 [370/766] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:03:13.445 [371/766] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:13.445 [372/766] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:13.703 [373/766] Linking static target lib/librte_reorder.a 00:03:13.703 [374/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:13.704 [375/766] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:13.704 [376/766] Linking static target lib/librte_rib.a 00:03:13.704 [377/766] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.704 [378/766] Linking target lib/librte_member.so.25.0 00:03:13.704 [379/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:13.704 [380/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:13.704 [381/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:13.704 [382/766] Linking static target lib/librte_stack.a 00:03:13.962 [383/766] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.962 [384/766] 
Linking target lib/librte_reorder.so.25.0 00:03:13.962 [385/766] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.962 [386/766] Linking target lib/librte_rib.so.25.0 00:03:13.962 [387/766] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:13.962 [388/766] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:13.962 [389/766] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.962 [390/766] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:13.962 [391/766] Linking target lib/librte_stack.so.25.0 00:03:13.962 [392/766] Linking static target lib/librte_security.a 00:03:13.962 [393/766] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:14.221 [394/766] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:14.221 [395/766] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:14.479 [396/766] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:14.479 [397/766] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.479 [398/766] Linking target lib/librte_security.so.25.0 00:03:14.479 [399/766] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:14.479 [400/766] Linking static target lib/librte_sched.a 00:03:14.479 [401/766] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.479 [402/766] Linking target lib/librte_mldev.so.25.0 00:03:14.479 [403/766] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:14.737 [404/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:14.737 [405/766] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.737 [406/766] Linking target lib/librte_sched.so.25.0 00:03:14.737 [407/766] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:14.737 [408/766] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:14.996 [409/766] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:14.996 [410/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:14.996 [411/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:15.255 [412/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:15.255 [413/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:15.514 [414/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:15.514 [415/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:15.514 [416/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:15.514 [417/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:15.772 [418/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:15.772 [419/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:15.772 [420/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:15.772 [421/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:15.772 [422/766] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:16.031 [423/766] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:16.031 [424/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:16.031 [425/766] Linking static target lib/librte_ipsec.a 00:03:16.291 [426/766] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.291 [427/766] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:16.291 [428/766] Linking target lib/librte_ipsec.so.25.0 00:03:16.551 [429/766] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:16.551 [430/766] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:16.551 [431/766] Compiling C object 
lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:16.551 [432/766] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:16.551 [433/766] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:16.810 [434/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:16.810 [435/766] Linking static target lib/librte_pdcp.a 00:03:16.810 [436/766] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:16.810 [437/766] Linking static target lib/librte_fib.a 00:03:16.810 [438/766] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:16.810 [439/766] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:17.069 [440/766] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.070 [441/766] Linking target lib/librte_pdcp.so.25.0 00:03:17.070 [442/766] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:17.070 [443/766] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.070 [444/766] Linking target lib/librte_fib.so.25.0 00:03:17.329 [445/766] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:17.329 [446/766] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:17.329 [447/766] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:17.588 [448/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:17.588 [449/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:17.588 [450/766] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:17.847 [451/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:17.847 [452/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:17.847 [453/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:18.106 [454/766] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:18.106 [455/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:18.106 [456/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:18.106 [457/766] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:18.106 [458/766] Linking static target lib/librte_pdump.a 00:03:18.106 [459/766] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:18.106 [460/766] Linking static target lib/librte_port.a 00:03:18.106 [461/766] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:18.365 [462/766] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:18.365 [463/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:18.365 [464/766] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.365 [465/766] Linking target lib/librte_pdump.so.25.0 00:03:18.365 [466/766] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.624 [467/766] Linking target lib/librte_port.so.25.0 00:03:18.624 [468/766] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:18.624 [469/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:18.624 [470/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:18.624 [471/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:18.624 [472/766] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:18.883 [473/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:18.883 [474/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:18.883 [475/766] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:19.142 [476/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:19.142 [477/766] 
Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:19.142 [478/766] Linking static target lib/librte_table.a 00:03:19.142 [479/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:19.400 [480/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:19.400 [481/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:19.659 [482/766] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:19.659 [483/766] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.659 [484/766] Linking target lib/librte_table.so.25.0 00:03:19.659 [485/766] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:19.919 [486/766] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:19.919 [487/766] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:19.919 [488/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:19.919 [489/766] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:20.178 [490/766] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:20.178 [491/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:20.178 [492/766] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:20.437 [493/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:20.437 [494/766] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:20.695 [495/766] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:20.695 [496/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:20.695 [497/766] Linking static target lib/librte_graph.a 00:03:20.695 [498/766] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:20.695 [499/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:20.695 [500/766] 
Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:20.954 [501/766] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:21.213 [502/766] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:21.213 [503/766] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:21.213 [504/766] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.213 [505/766] Linking target lib/librte_graph.so.25.0 00:03:21.213 [506/766] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:21.472 [507/766] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:21.472 [508/766] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:21.472 [509/766] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:21.472 [510/766] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:21.731 [511/766] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:21.731 [512/766] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:21.731 [513/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:21.731 [514/766] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:21.731 [515/766] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:21.991 [516/766] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:21.991 [517/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:21.991 [518/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:21.991 [519/766] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:21.991 [520/766] Linking static target lib/librte_node.a 00:03:21.991 [521/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:22.250 [522/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:22.250 [523/766] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:22.250 [524/766] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.250 [525/766] Linking target lib/librte_node.so.25.0 00:03:22.509 [526/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:22.509 [527/766] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:22.509 [528/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:22.509 [529/766] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:22.509 [530/766] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:22.509 [531/766] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:22.509 [532/766] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.509 [533/766] Linking static target drivers/librte_bus_pci.a 00:03:22.769 [534/766] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.769 [535/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:22.769 [536/766] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.769 [537/766] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.769 [538/766] Linking static target drivers/librte_bus_vdev.a 00:03:22.769 [539/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:22.769 [540/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:22.769 [541/766] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:22.769 [542/766] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:23.028 [543/766] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.028 [544/766] Linking target drivers/librte_bus_vdev.so.25.0 
00:03:23.028 [545/766] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:23.028 [546/766] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:23.028 [547/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:23.028 [548/766] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:23.028 [549/766] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:23.028 [550/766] Linking static target drivers/librte_mempool_ring.a 00:03:23.028 [551/766] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.028 [552/766] Linking target drivers/librte_mempool_ring.so.25.0 00:03:23.028 [553/766] Linking target drivers/librte_bus_pci.so.25.0 00:03:23.288 [554/766] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:23.288 [555/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:23.547 [556/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:23.806 [557/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:23.806 [558/766] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:24.066 [559/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:24.634 [560/766] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:24.635 [561/766] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:24.635 [562/766] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:24.635 [563/766] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:24.635 [564/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:24.635 [565/766] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:24.893 [566/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:24.893 [567/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:25.152 [568/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:25.152 [569/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:25.152 [570/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:25.412 [571/766] Compiling C object drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:03:25.412 [572/766] Linking static target drivers/libtmp_rte_power_acpi.a 00:03:25.412 [573/766] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:03:25.412 [574/766] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:25.412 [575/766] Linking static target drivers/librte_power_acpi.a 00:03:25.412 [576/766] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:25.412 [577/766] Linking target drivers/librte_power_acpi.so.25.0 00:03:25.726 [578/766] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:03:25.726 [579/766] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:03:25.726 [580/766] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:03:25.726 [581/766] Linking static target drivers/libtmp_rte_power_cppc.a 00:03:25.726 [582/766] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:03:25.726 [583/766] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:25.726 [584/766] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:03:25.726 [585/766] Linking static target drivers/librte_power_amd_pstate.a 00:03:25.726 
[586/766] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:25.726 [587/766] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:25.726 [588/766] Linking static target drivers/librte_power_cppc.a 00:03:26.047 [589/766] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:26.047 [590/766] Linking target drivers/librte_power_amd_pstate.so.25.0 00:03:26.047 [591/766] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:03:26.047 [592/766] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:03:26.047 [593/766] Linking target drivers/librte_power_cppc.so.25.0 00:03:26.047 [594/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:03:26.047 [595/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:03:26.047 [596/766] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:03:26.047 [597/766] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:03:26.047 [598/766] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:03:26.047 [599/766] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:03:26.047 [600/766] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:26.047 [601/766] Linking static target drivers/librte_power_intel_pstate.a 00:03:26.047 [602/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:26.047 [603/766] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:26.047 [604/766] Linking target drivers/librte_power_intel_pstate.so.25.0 00:03:26.047 [605/766] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 
00:03:26.047 [606/766] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:03:26.047 [607/766] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:26.328 [608/766] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:26.328 [609/766] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:26.328 [610/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:26.328 [611/766] Linking static target drivers/librte_power_intel_uncore.a 00:03:26.328 [612/766] Linking static target drivers/librte_power_kvm_vm.a 00:03:26.328 [613/766] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:26.328 [614/766] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:26.328 [615/766] Linking target drivers/librte_power_intel_uncore.so.25.0 00:03:26.328 [616/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:26.328 [617/766] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.329 [618/766] Linking target drivers/librte_power_kvm_vm.so.25.0 00:03:26.587 [619/766] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:26.587 [620/766] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:26.587 [621/766] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:26.587 [622/766] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:26.844 [623/766] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:26.844 [624/766] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:26.844 [625/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:26.844 [626/766] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:26.844 
[627/766] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:03:26.844 [628/766] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:27.103 [629/766] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o
00:03:27.103 [630/766] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:27.103 [631/766] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:27.103 [632/766] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:27.103 [633/766] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:27.360 [634/766] Linking static target drivers/librte_net_i40e.a
00:03:27.361 [635/766] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:27.361 [636/766] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:27.361 [637/766] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:27.361 [638/766] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:27.361 [639/766] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:27.361 [640/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:27.619 [641/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:27.619 [642/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:27.619 [643/766] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:27.877 [644/766] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.877 [645/766] Linking target drivers/librte_net_i40e.so.25.0
00:03:27.877 [646/766] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:27.877 [647/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:28.135 [648/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:28.135 [649/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:28.393 [650/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:28.393 [651/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:28.393 [652/766] Linking static target lib/librte_vhost.a
00:03:28.393 [653/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:28.393 [654/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:28.650 [655/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:28.651 [656/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:28.909 [657/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:28.909 [658/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:28.909 [659/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:29.167 [660/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:29.167 [661/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:29.167 [662/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:29.167 [663/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:29.425 [664/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:29.425 [665/766] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.425 [666/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:29.425 [667/766] Linking target lib/librte_vhost.so.25.0
00:03:29.425 [668/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:29.683 [669/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:29.683 [670/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:29.683 [671/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:03:29.683 [672/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:29.683 [673/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:29.941 [674/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:30.199 [675/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:30.199 [676/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:03:30.199 [677/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:30.765 [678/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:30.765 [679/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:31.023 [680/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:31.023 [681/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:31.023 [682/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:31.023 [683/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:31.023 [684/766] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:31.023 [685/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:31.023 [686/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:03:31.296 [687/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:03:31.296 [688/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:31.296 [689/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:03:31.296 [690/766] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:31.554 [691/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:03:31.554 [692/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:03:31.554 [693/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:03:31.554 [694/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:03:31.812 [695/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:03:31.812 [696/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:03:31.812 [697/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:03:31.812 [698/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:32.071 [699/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:03:32.071 [700/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:32.071 [701/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:32.329 [702/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:32.329 [703/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:32.329 [704/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:32.329 [705/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:32.329 [706/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:32.587 [707/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:32.587 [708/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:32.587 [709/766] Linking static target lib/librte_pipeline.a
00:03:32.587 [710/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:32.846 [711/766] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:32.846 [712/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:03:32.846 [713/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:33.104 [714/766] Linking target app/dpdk-dumpcap
00:03:33.104 [715/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:03:33.104 [716/766] Linking target app/dpdk-graph
00:03:33.104 [717/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:33.104 [718/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:33.362 [719/766] Linking target app/dpdk-pdump
00:03:33.362 [720/766] Linking target app/dpdk-proc-info
00:03:33.362 [721/766] Linking target app/dpdk-test-acl
00:03:33.362 [722/766] Linking target app/dpdk-test-cmdline
00:03:33.620 [723/766] Linking target app/dpdk-test-compress-perf
00:03:33.620 [724/766] Linking target app/dpdk-test-crypto-perf
00:03:33.620 [725/766] Linking target app/dpdk-test-bbdev
00:03:33.620 [726/766] Linking target app/dpdk-test-dma-perf
00:03:33.878 [727/766] Linking target app/dpdk-test-eventdev
00:03:33.878 [728/766] Linking target app/dpdk-test-fib
00:03:33.878 [729/766] Linking target app/dpdk-test-flow-perf
00:03:33.878 [730/766] Linking target app/dpdk-test-gpudev
00:03:34.136 [731/766] Linking target app/dpdk-test-mldev
00:03:34.136 [732/766] Linking target app/dpdk-test-pipeline
00:03:34.136 [733/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:34.394 [734/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:34.394 [735/766] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o
00:03:34.394 [736/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:34.394 [737/766] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:34.652 [738/766] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:34.652 [739/766] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:34.910 [740/766] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:34.910 [741/766] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.910 [742/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:34.910 [743/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:34.910 [744/766] Linking target lib/librte_pipeline.so.25.0
00:03:35.168 [745/766] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:03:35.168 [746/766] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:35.168 [747/766] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:35.426 [748/766] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:35.426 [749/766] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:35.684 [750/766] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:35.684 [751/766] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:35.684 [752/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:35.941 [753/766] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:35.941 [754/766] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:36.199 [755/766] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:36.199 [756/766] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:36.199 [757/766] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:36.199 [758/766] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o
00:03:36.199 [759/766] Linking target app/dpdk-test-sad
00:03:36.199 [760/766] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:36.199 [761/766] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:36.458 [762/766] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:36.458 [763/766] Linking target app/dpdk-test-regex
00:03:36.717 [764/766] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:36.717 [765/766] Linking target app/dpdk-testpmd
00:03:36.976 [766/766] Linking target app/dpdk-test-security-perf
00:03:36.976 18:44:06 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:03:36.976 18:44:06 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:36.976 18:44:06 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:36.976 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:36.976 [0/1] Installing files.
00:03:37.235 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:37.235 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.235 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.236 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:37.499 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.500 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.501 Installing
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:37.501 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.501 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.501 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.502 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.502 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:37.502 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.502 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.503 
Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.503 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:37.504 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 
00:03:37.504 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_ethdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.504 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:37.505 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 
Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.505 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_sched.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 
Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing 
drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.769 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.769 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.769 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.769 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.769 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.769 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.769 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.770 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:37.773 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25
00:03:37.773 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so
00:03:37.773 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25
00:03:37.773 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:03:37.773 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25
00:03:37.773 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so
00:03:37.773 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25
00:03:37.773 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:03:37.773 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25
00:03:37.773 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:03:37.773 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25
00:03:37.773 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:03:37.773 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25
00:03:37.773 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:03:37.773 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25
00:03:37.773 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so
00:03:37.773 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25
00:03:37.773 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so
00:03:37.773 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25
00:03:37.773 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so
00:03:37.773 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25
00:03:37.773 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so
00:03:37.773 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25
00:03:37.773 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so
00:03:37.773 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25
00:03:37.773 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so
00:03:37.773 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25
00:03:37.773 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so
00:03:37.773 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25
00:03:37.773 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so
00:03:37.773 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25
00:03:37.773 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so
00:03:37.773 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25
00:03:37.773 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so
00:03:37.773 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25
00:03:37.773 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so
00:03:37.773 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25
00:03:37.773 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so
00:03:37.773 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25
00:03:37.773 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so
00:03:37.773 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25
00:03:37.773 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so
00:03:37.773 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25
00:03:37.773 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so
00:03:37.773 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25
00:03:37.773 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so
00:03:37.773 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25
00:03:37.773 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so
00:03:37.773 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25
00:03:37.773 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so
00:03:37.773 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25
00:03:37.773 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so
00:03:37.773 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25
00:03:37.773 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so
00:03:37.773 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25
00:03:37.773 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so
00:03:37.773 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25
00:03:37.773 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so
00:03:37.773 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25
00:03:37.773 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so
00:03:37.773 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25
00:03:37.773 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so
00:03:37.773 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25
00:03:37.773 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so
00:03:37.773 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25
00:03:37.773 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so
00:03:37.773 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25
00:03:37.773 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so
00:03:37.773 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25
00:03:37.773 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so
00:03:37.773 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25
00:03:37.773 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so
00:03:37.773 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25
00:03:37.773 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so
00:03:37.773 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25
00:03:37.773 Installing symlink pointing to
librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:37.773 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:37.773 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:37.773 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:37.773 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:37.773 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:37.773 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:37.773 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:37.774 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:37.774 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:37.774 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:37.774 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:37.774 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:37.774 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:37.774 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:37.774 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:37.774 Installing symlink pointing to 
librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:37.774 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:37.774 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:37.774 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:03:37.774 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:37.774 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:37.774 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:37.774 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:37.774 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:37.774 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:37.774 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:37.774 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:37.774 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:37.774 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:37.774 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:37.774 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:37.774 Installing symlink pointing to librte_table.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:37.774 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:37.774 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:37.774 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:37.774 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:37.774 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:37.774 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:37.774 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:37.774 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:37.774 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:37.774 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:37.774 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:37.774 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:37.774 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:37.774 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:37.774 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:37.774 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:37.774 './librte_bus_vdev.so.25.0' -> 
'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:37.774 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:37.774 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:37.774 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:37.774 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:37.774 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:37.774 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:37.774 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:03:37.774 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:03:37.774 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:03:37.774 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:03:37.774 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:03:37.774 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:03:37.774 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:03:37.774 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:03:37.774 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:03:37.774 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:03:37.774 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:03:37.774 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:03:37.774 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:03:37.774 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:03:37.774 './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:03:37.774 './librte_power_kvm_vm.so' -> 
'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:03:37.774 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:03:37.774 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:03:37.774 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:37.774 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:37.774 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:37.774 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:03:37.774 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:03:37.774 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:03:37.774 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:03:37.774 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:03:37.774 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:03:37.774 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:03:37.774 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:03:37.774 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:03:37.774 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:03:37.774 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:03:37.774 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:03:37.774 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:38.033 18:44:07 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:38.033 ************************************ 00:03:38.033 END TEST build_native_dpdk 00:03:38.033 ************************************ 00:03:38.033 18:44:07 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:38.033 00:03:38.033 real 0m46.053s 00:03:38.033 user 5m11.413s 00:03:38.033 sys 0m57.290s 00:03:38.033 18:44:07 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:38.033 18:44:07 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:38.033 18:44:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:38.033 18:44:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:38.033 18:44:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:38.033 18:44:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:38.033 18:44:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:38.033 18:44:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:38.033 18:44:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:38.033 18:44:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:38.292 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:38.292 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.292 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:38.292 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:38.858 Using 'verbs' RDMA provider 00:03:54.673 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:12.782 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:12.782 Creating mk/config.mk...done. 00:04:12.782 Creating mk/cc.flags.mk...done. 00:04:12.782 Type 'make' to build. 00:04:12.782 18:44:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:12.782 18:44:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:12.782 18:44:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:12.782 18:44:40 -- common/autotest_common.sh@10 -- $ set +x 00:04:12.782 ************************************ 00:04:12.782 START TEST make 00:04:12.782 ************************************ 00:04:12.782 18:44:40 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:12.782 make[1]: Nothing to be done for 'all'. 
00:04:59.471 CC lib/log/log.o 00:04:59.471 CC lib/log/log_flags.o 00:04:59.471 CC lib/log/log_deprecated.o 00:04:59.471 CC lib/ut/ut.o 00:04:59.471 CC lib/ut_mock/mock.o 00:04:59.471 LIB libspdk_log.a 00:04:59.471 LIB libspdk_ut.a 00:04:59.471 LIB libspdk_ut_mock.a 00:04:59.471 SO libspdk_log.so.7.1 00:04:59.471 SO libspdk_ut_mock.so.6.0 00:04:59.471 SO libspdk_ut.so.2.0 00:04:59.471 SYMLINK libspdk_ut_mock.so 00:04:59.471 SYMLINK libspdk_ut.so 00:04:59.471 SYMLINK libspdk_log.so 00:04:59.471 CC lib/dma/dma.o 00:04:59.471 CC lib/ioat/ioat.o 00:04:59.471 CC lib/util/base64.o 00:04:59.471 CC lib/util/cpuset.o 00:04:59.471 CC lib/util/crc32.o 00:04:59.472 CC lib/util/bit_array.o 00:04:59.472 CXX lib/trace_parser/trace.o 00:04:59.472 CC lib/util/crc16.o 00:04:59.472 CC lib/util/crc32c.o 00:04:59.472 CC lib/vfio_user/host/vfio_user_pci.o 00:04:59.472 CC lib/vfio_user/host/vfio_user.o 00:04:59.472 CC lib/util/crc32_ieee.o 00:04:59.472 CC lib/util/crc64.o 00:04:59.472 CC lib/util/dif.o 00:04:59.472 LIB libspdk_dma.a 00:04:59.472 CC lib/util/fd.o 00:04:59.472 SO libspdk_dma.so.5.0 00:04:59.472 CC lib/util/fd_group.o 00:04:59.472 CC lib/util/file.o 00:04:59.472 CC lib/util/hexlify.o 00:04:59.472 SYMLINK libspdk_dma.so 00:04:59.472 LIB libspdk_ioat.a 00:04:59.472 CC lib/util/iov.o 00:04:59.472 SO libspdk_ioat.so.7.0 00:04:59.472 CC lib/util/math.o 00:04:59.472 SYMLINK libspdk_ioat.so 00:04:59.472 CC lib/util/net.o 00:04:59.472 LIB libspdk_vfio_user.a 00:04:59.472 CC lib/util/pipe.o 00:04:59.472 SO libspdk_vfio_user.so.5.0 00:04:59.472 CC lib/util/strerror_tls.o 00:04:59.472 CC lib/util/string.o 00:04:59.472 SYMLINK libspdk_vfio_user.so 00:04:59.472 CC lib/util/uuid.o 00:04:59.472 CC lib/util/xor.o 00:04:59.472 CC lib/util/zipf.o 00:04:59.472 CC lib/util/md5.o 00:04:59.472 LIB libspdk_util.a 00:04:59.472 LIB libspdk_trace_parser.a 00:04:59.472 SO libspdk_trace_parser.so.6.0 00:04:59.472 SO libspdk_util.so.10.1 00:04:59.472 SYMLINK libspdk_trace_parser.so 00:04:59.472 SYMLINK 
libspdk_util.so 00:04:59.472 CC lib/json/json_parse.o 00:04:59.472 CC lib/json/json_write.o 00:04:59.472 CC lib/json/json_util.o 00:04:59.472 CC lib/idxd/idxd_user.o 00:04:59.472 CC lib/idxd/idxd.o 00:04:59.472 CC lib/idxd/idxd_kernel.o 00:04:59.472 CC lib/env_dpdk/env.o 00:04:59.472 CC lib/conf/conf.o 00:04:59.472 CC lib/rdma_utils/rdma_utils.o 00:04:59.472 CC lib/vmd/vmd.o 00:04:59.472 CC lib/env_dpdk/memory.o 00:04:59.472 LIB libspdk_conf.a 00:04:59.472 CC lib/vmd/led.o 00:04:59.472 CC lib/env_dpdk/pci.o 00:04:59.472 CC lib/env_dpdk/init.o 00:04:59.472 SO libspdk_conf.so.6.0 00:04:59.472 LIB libspdk_rdma_utils.a 00:04:59.472 LIB libspdk_json.a 00:04:59.472 SO libspdk_rdma_utils.so.1.0 00:04:59.472 SYMLINK libspdk_conf.so 00:04:59.472 CC lib/env_dpdk/threads.o 00:04:59.472 SO libspdk_json.so.6.0 00:04:59.472 SYMLINK libspdk_rdma_utils.so 00:04:59.472 CC lib/env_dpdk/pci_ioat.o 00:04:59.472 SYMLINK libspdk_json.so 00:04:59.472 CC lib/env_dpdk/pci_virtio.o 00:04:59.472 CC lib/env_dpdk/pci_vmd.o 00:04:59.472 CC lib/env_dpdk/pci_idxd.o 00:04:59.472 CC lib/env_dpdk/pci_event.o 00:04:59.472 CC lib/rdma_provider/common.o 00:04:59.472 CC lib/env_dpdk/sigbus_handler.o 00:04:59.472 CC lib/env_dpdk/pci_dpdk.o 00:04:59.472 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:59.472 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:59.472 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:59.472 LIB libspdk_idxd.a 00:04:59.472 SO libspdk_idxd.so.12.1 00:04:59.472 LIB libspdk_vmd.a 00:04:59.472 SO libspdk_vmd.so.6.0 00:04:59.472 SYMLINK libspdk_idxd.so 00:04:59.472 SYMLINK libspdk_vmd.so 00:04:59.472 CC lib/jsonrpc/jsonrpc_server.o 00:04:59.472 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:59.472 CC lib/jsonrpc/jsonrpc_client.o 00:04:59.472 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:59.472 LIB libspdk_rdma_provider.a 00:04:59.472 SO libspdk_rdma_provider.so.7.0 00:04:59.472 SYMLINK libspdk_rdma_provider.so 00:04:59.472 LIB libspdk_jsonrpc.a 00:04:59.472 SO libspdk_jsonrpc.so.6.0 00:04:59.472 SYMLINK 
libspdk_jsonrpc.so 00:04:59.472 LIB libspdk_env_dpdk.a 00:04:59.472 SO libspdk_env_dpdk.so.15.1 00:04:59.472 CC lib/rpc/rpc.o 00:04:59.472 SYMLINK libspdk_env_dpdk.so 00:04:59.472 LIB libspdk_rpc.a 00:04:59.472 SO libspdk_rpc.so.6.0 00:04:59.472 SYMLINK libspdk_rpc.so 00:04:59.472 CC lib/keyring/keyring_rpc.o 00:04:59.472 CC lib/keyring/keyring.o 00:04:59.472 CC lib/notify/notify.o 00:04:59.472 CC lib/notify/notify_rpc.o 00:04:59.472 CC lib/trace/trace.o 00:04:59.472 CC lib/trace/trace_flags.o 00:04:59.472 CC lib/trace/trace_rpc.o 00:04:59.472 LIB libspdk_notify.a 00:04:59.472 SO libspdk_notify.so.6.0 00:04:59.472 LIB libspdk_keyring.a 00:04:59.472 LIB libspdk_trace.a 00:04:59.472 SYMLINK libspdk_notify.so 00:04:59.472 SO libspdk_keyring.so.2.0 00:04:59.472 SO libspdk_trace.so.11.0 00:04:59.472 SYMLINK libspdk_keyring.so 00:04:59.472 SYMLINK libspdk_trace.so 00:04:59.472 CC lib/thread/iobuf.o 00:04:59.472 CC lib/thread/thread.o 00:04:59.472 CC lib/sock/sock.o 00:04:59.472 CC lib/sock/sock_rpc.o 00:04:59.472 LIB libspdk_sock.a 00:04:59.472 SO libspdk_sock.so.10.0 00:04:59.472 SYMLINK libspdk_sock.so 00:04:59.472 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:59.472 CC lib/nvme/nvme_ctrlr.o 00:04:59.472 CC lib/nvme/nvme_ns_cmd.o 00:04:59.472 CC lib/nvme/nvme_fabric.o 00:04:59.472 CC lib/nvme/nvme_ns.o 00:04:59.472 CC lib/nvme/nvme_pcie_common.o 00:04:59.472 CC lib/nvme/nvme_pcie.o 00:04:59.472 CC lib/nvme/nvme.o 00:04:59.472 CC lib/nvme/nvme_qpair.o 00:04:59.732 LIB libspdk_thread.a 00:04:59.992 SO libspdk_thread.so.11.0 00:04:59.992 CC lib/nvme/nvme_quirks.o 00:04:59.992 CC lib/nvme/nvme_transport.o 00:04:59.992 SYMLINK libspdk_thread.so 00:04:59.992 CC lib/nvme/nvme_discovery.o 00:04:59.992 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:59.992 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:00.252 CC lib/nvme/nvme_tcp.o 00:05:00.252 CC lib/nvme/nvme_opal.o 00:05:00.252 CC lib/nvme/nvme_io_msg.o 00:05:00.252 CC lib/nvme/nvme_poll_group.o 00:05:00.550 CC lib/nvme/nvme_zns.o 00:05:00.550 CC 
lib/nvme/nvme_stubs.o 00:05:00.550 CC lib/blob/blobstore.o 00:05:00.550 CC lib/accel/accel.o 00:05:00.856 CC lib/nvme/nvme_auth.o 00:05:00.856 CC lib/nvme/nvme_cuse.o 00:05:00.856 CC lib/init/json_config.o 00:05:00.856 CC lib/init/subsystem.o 00:05:00.856 CC lib/init/subsystem_rpc.o 00:05:00.856 CC lib/init/rpc.o 00:05:01.127 CC lib/accel/accel_rpc.o 00:05:01.127 CC lib/accel/accel_sw.o 00:05:01.127 LIB libspdk_init.a 00:05:01.127 SO libspdk_init.so.6.0 00:05:01.127 SYMLINK libspdk_init.so 00:05:01.127 CC lib/blob/request.o 00:05:01.387 CC lib/virtio/virtio.o 00:05:01.387 CC lib/blob/zeroes.o 00:05:01.387 CC lib/fsdev/fsdev.o 00:05:01.647 CC lib/blob/blob_bs_dev.o 00:05:01.647 CC lib/nvme/nvme_rdma.o 00:05:01.647 CC lib/fsdev/fsdev_io.o 00:05:01.647 CC lib/virtio/virtio_vhost_user.o 00:05:01.647 CC lib/virtio/virtio_vfio_user.o 00:05:01.647 CC lib/virtio/virtio_pci.o 00:05:01.647 CC lib/event/app.o 00:05:01.647 LIB libspdk_accel.a 00:05:01.647 CC lib/fsdev/fsdev_rpc.o 00:05:01.647 SO libspdk_accel.so.16.0 00:05:01.906 SYMLINK libspdk_accel.so 00:05:01.906 CC lib/event/reactor.o 00:05:01.906 CC lib/event/log_rpc.o 00:05:01.907 CC lib/event/app_rpc.o 00:05:01.907 CC lib/event/scheduler_static.o 00:05:01.907 LIB libspdk_virtio.a 00:05:01.907 SO libspdk_virtio.so.7.0 00:05:01.907 SYMLINK libspdk_virtio.so 00:05:01.907 CC lib/bdev/bdev.o 00:05:01.907 CC lib/bdev/bdev_rpc.o 00:05:01.907 CC lib/bdev/bdev_zone.o 00:05:01.907 CC lib/bdev/part.o 00:05:02.166 LIB libspdk_fsdev.a 00:05:02.166 CC lib/bdev/scsi_nvme.o 00:05:02.166 SO libspdk_fsdev.so.2.0 00:05:02.166 SYMLINK libspdk_fsdev.so 00:05:02.166 LIB libspdk_event.a 00:05:02.166 SO libspdk_event.so.14.0 00:05:02.426 SYMLINK libspdk_event.so 00:05:02.426 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:02.686 LIB libspdk_nvme.a 00:05:02.945 SO libspdk_nvme.so.15.0 00:05:02.945 LIB libspdk_fuse_dispatcher.a 00:05:02.945 SO libspdk_fuse_dispatcher.so.1.0 00:05:02.945 SYMLINK libspdk_fuse_dispatcher.so 00:05:03.205 SYMLINK 
libspdk_nvme.so 00:05:03.775 LIB libspdk_blob.a 00:05:03.775 SO libspdk_blob.so.12.0 00:05:04.035 SYMLINK libspdk_blob.so 00:05:04.294 CC lib/blobfs/blobfs.o 00:05:04.294 CC lib/blobfs/tree.o 00:05:04.294 CC lib/lvol/lvol.o 00:05:04.863 LIB libspdk_bdev.a 00:05:04.863 SO libspdk_bdev.so.17.0 00:05:05.123 SYMLINK libspdk_bdev.so 00:05:05.123 LIB libspdk_blobfs.a 00:05:05.123 SO libspdk_blobfs.so.11.0 00:05:05.123 SYMLINK libspdk_blobfs.so 00:05:05.383 LIB libspdk_lvol.a 00:05:05.383 SO libspdk_lvol.so.11.0 00:05:05.383 CC lib/scsi/dev.o 00:05:05.383 CC lib/scsi/lun.o 00:05:05.383 CC lib/scsi/port.o 00:05:05.383 CC lib/scsi/scsi_bdev.o 00:05:05.383 CC lib/nbd/nbd.o 00:05:05.383 CC lib/scsi/scsi.o 00:05:05.383 CC lib/ftl/ftl_core.o 00:05:05.383 CC lib/ublk/ublk.o 00:05:05.383 CC lib/nvmf/ctrlr.o 00:05:05.383 SYMLINK libspdk_lvol.so 00:05:05.383 CC lib/ublk/ublk_rpc.o 00:05:05.383 CC lib/scsi/scsi_pr.o 00:05:05.383 CC lib/scsi/scsi_rpc.o 00:05:05.383 CC lib/scsi/task.o 00:05:05.383 CC lib/ftl/ftl_init.o 00:05:05.642 CC lib/ftl/ftl_layout.o 00:05:05.642 CC lib/ftl/ftl_debug.o 00:05:05.642 CC lib/ftl/ftl_io.o 00:05:05.642 CC lib/nbd/nbd_rpc.o 00:05:05.642 CC lib/ftl/ftl_sb.o 00:05:05.642 CC lib/ftl/ftl_l2p.o 00:05:05.642 CC lib/ftl/ftl_l2p_flat.o 00:05:05.902 LIB libspdk_scsi.a 00:05:05.902 CC lib/ftl/ftl_nv_cache.o 00:05:05.902 SO libspdk_scsi.so.9.0 00:05:05.902 LIB libspdk_nbd.a 00:05:05.902 CC lib/nvmf/ctrlr_discovery.o 00:05:05.902 CC lib/nvmf/ctrlr_bdev.o 00:05:05.902 SO libspdk_nbd.so.7.0 00:05:05.902 CC lib/nvmf/subsystem.o 00:05:05.902 SYMLINK libspdk_scsi.so 00:05:05.902 CC lib/nvmf/nvmf.o 00:05:05.902 CC lib/nvmf/nvmf_rpc.o 00:05:05.902 SYMLINK libspdk_nbd.so 00:05:05.902 CC lib/nvmf/transport.o 00:05:05.902 LIB libspdk_ublk.a 00:05:05.902 SO libspdk_ublk.so.3.0 00:05:06.161 SYMLINK libspdk_ublk.so 00:05:06.161 CC lib/nvmf/tcp.o 00:05:06.161 CC lib/iscsi/conn.o 00:05:06.421 CC lib/nvmf/stubs.o 00:05:06.681 CC lib/nvmf/mdns_server.o 00:05:06.681 CC 
lib/nvmf/rdma.o 00:05:06.681 CC lib/iscsi/init_grp.o 00:05:06.681 CC lib/iscsi/iscsi.o 00:05:06.681 CC lib/ftl/ftl_band.o 00:05:06.681 CC lib/nvmf/auth.o 00:05:06.941 CC lib/iscsi/param.o 00:05:06.941 CC lib/iscsi/portal_grp.o 00:05:06.941 CC lib/iscsi/tgt_node.o 00:05:07.201 CC lib/ftl/ftl_band_ops.o 00:05:07.201 CC lib/iscsi/iscsi_subsystem.o 00:05:07.201 CC lib/ftl/ftl_writer.o 00:05:07.201 CC lib/iscsi/iscsi_rpc.o 00:05:07.201 CC lib/ftl/ftl_rq.o 00:05:07.461 CC lib/iscsi/task.o 00:05:07.461 CC lib/ftl/ftl_reloc.o 00:05:07.461 CC lib/ftl/ftl_l2p_cache.o 00:05:07.461 CC lib/ftl/ftl_p2l.o 00:05:07.461 CC lib/vhost/vhost.o 00:05:07.461 CC lib/vhost/vhost_rpc.o 00:05:07.721 CC lib/ftl/ftl_p2l_log.o 00:05:07.721 CC lib/ftl/mngt/ftl_mngt.o 00:05:07.721 CC lib/vhost/vhost_scsi.o 00:05:07.721 CC lib/vhost/vhost_blk.o 00:05:07.981 CC lib/vhost/rte_vhost_user.o 00:05:07.981 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:07.981 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:07.981 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:08.242 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:08.242 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:08.242 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:08.242 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:08.242 LIB libspdk_iscsi.a 00:05:08.242 SO libspdk_iscsi.so.8.0 00:05:08.242 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:08.242 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:08.242 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:08.501 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:08.501 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:08.501 SYMLINK libspdk_iscsi.so 00:05:08.501 CC lib/ftl/utils/ftl_conf.o 00:05:08.501 CC lib/ftl/utils/ftl_md.o 00:05:08.501 CC lib/ftl/utils/ftl_mempool.o 00:05:08.501 CC lib/ftl/utils/ftl_bitmap.o 00:05:08.501 CC lib/ftl/utils/ftl_property.o 00:05:08.501 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:08.501 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:08.501 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:08.762 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:08.762 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:05:08.762 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:08.762 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:08.762 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:08.762 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:08.762 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:08.762 LIB libspdk_vhost.a 00:05:08.762 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:08.762 LIB libspdk_nvmf.a 00:05:09.022 SO libspdk_vhost.so.8.0 00:05:09.022 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:09.022 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:09.022 CC lib/ftl/base/ftl_base_dev.o 00:05:09.022 CC lib/ftl/base/ftl_base_bdev.o 00:05:09.022 CC lib/ftl/ftl_trace.o 00:05:09.022 SYMLINK libspdk_vhost.so 00:05:09.022 SO libspdk_nvmf.so.20.0 00:05:09.281 LIB libspdk_ftl.a 00:05:09.281 SYMLINK libspdk_nvmf.so 00:05:09.539 SO libspdk_ftl.so.9.0 00:05:09.539 SYMLINK libspdk_ftl.so 00:05:10.108 CC module/env_dpdk/env_dpdk_rpc.o 00:05:10.108 CC module/accel/error/accel_error.o 00:05:10.108 CC module/keyring/file/keyring.o 00:05:10.108 CC module/accel/dsa/accel_dsa.o 00:05:10.108 CC module/accel/ioat/accel_ioat.o 00:05:10.108 CC module/blob/bdev/blob_bdev.o 00:05:10.108 CC module/sock/posix/posix.o 00:05:10.108 CC module/accel/iaa/accel_iaa.o 00:05:10.108 CC module/fsdev/aio/fsdev_aio.o 00:05:10.108 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:10.108 LIB libspdk_env_dpdk_rpc.a 00:05:10.108 SO libspdk_env_dpdk_rpc.so.6.0 00:05:10.108 SYMLINK libspdk_env_dpdk_rpc.so 00:05:10.108 CC module/accel/ioat/accel_ioat_rpc.o 00:05:10.367 CC module/keyring/file/keyring_rpc.o 00:05:10.367 CC module/accel/dsa/accel_dsa_rpc.o 00:05:10.367 CC module/accel/error/accel_error_rpc.o 00:05:10.367 LIB libspdk_scheduler_dynamic.a 00:05:10.367 CC module/accel/iaa/accel_iaa_rpc.o 00:05:10.367 LIB libspdk_accel_ioat.a 00:05:10.367 SO libspdk_scheduler_dynamic.so.4.0 00:05:10.367 SO libspdk_accel_ioat.so.6.0 00:05:10.367 LIB libspdk_keyring_file.a 00:05:10.367 LIB libspdk_blob_bdev.a 00:05:10.367 SO 
libspdk_keyring_file.so.2.0 00:05:10.367 SYMLINK libspdk_scheduler_dynamic.so 00:05:10.367 SO libspdk_blob_bdev.so.12.0 00:05:10.367 LIB libspdk_accel_dsa.a 00:05:10.367 SYMLINK libspdk_accel_ioat.so 00:05:10.367 SO libspdk_accel_dsa.so.5.0 00:05:10.367 LIB libspdk_accel_error.a 00:05:10.367 SYMLINK libspdk_keyring_file.so 00:05:10.367 LIB libspdk_accel_iaa.a 00:05:10.367 SYMLINK libspdk_blob_bdev.so 00:05:10.367 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:10.367 SO libspdk_accel_error.so.2.0 00:05:10.367 SYMLINK libspdk_accel_dsa.so 00:05:10.367 SO libspdk_accel_iaa.so.3.0 00:05:10.367 CC module/fsdev/aio/linux_aio_mgr.o 00:05:10.626 SYMLINK libspdk_accel_iaa.so 00:05:10.626 SYMLINK libspdk_accel_error.so 00:05:10.626 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:10.626 CC module/keyring/linux/keyring.o 00:05:10.626 CC module/scheduler/gscheduler/gscheduler.o 00:05:10.626 CC module/keyring/linux/keyring_rpc.o 00:05:10.626 LIB libspdk_scheduler_dpdk_governor.a 00:05:10.626 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:10.626 CC module/bdev/error/vbdev_error.o 00:05:10.626 CC module/bdev/delay/vbdev_delay.o 00:05:10.626 CC module/bdev/error/vbdev_error_rpc.o 00:05:10.626 LIB libspdk_scheduler_gscheduler.a 00:05:10.885 CC module/bdev/gpt/gpt.o 00:05:10.885 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:10.885 SO libspdk_scheduler_gscheduler.so.4.0 00:05:10.885 CC module/bdev/gpt/vbdev_gpt.o 00:05:10.885 LIB libspdk_fsdev_aio.a 00:05:10.885 CC module/blobfs/bdev/blobfs_bdev.o 00:05:10.885 LIB libspdk_keyring_linux.a 00:05:10.885 SYMLINK libspdk_scheduler_gscheduler.so 00:05:10.885 SO libspdk_fsdev_aio.so.1.0 00:05:10.885 SO libspdk_keyring_linux.so.1.0 00:05:10.885 LIB libspdk_sock_posix.a 00:05:10.885 SYMLINK libspdk_keyring_linux.so 00:05:10.885 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:10.885 SYMLINK libspdk_fsdev_aio.so 00:05:10.885 SO libspdk_sock_posix.so.6.0 00:05:10.885 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:10.885 LIB 
libspdk_bdev_error.a 00:05:10.885 SYMLINK libspdk_sock_posix.so 00:05:10.885 CC module/bdev/lvol/vbdev_lvol.o 00:05:10.885 SO libspdk_bdev_error.so.6.0 00:05:11.145 CC module/bdev/malloc/bdev_malloc.o 00:05:11.145 CC module/bdev/null/bdev_null.o 00:05:11.145 LIB libspdk_blobfs_bdev.a 00:05:11.145 LIB libspdk_bdev_gpt.a 00:05:11.145 SYMLINK libspdk_bdev_error.so 00:05:11.145 SO libspdk_blobfs_bdev.so.6.0 00:05:11.145 SO libspdk_bdev_gpt.so.6.0 00:05:11.145 CC module/bdev/nvme/bdev_nvme.o 00:05:11.145 LIB libspdk_bdev_delay.a 00:05:11.145 CC module/bdev/passthru/vbdev_passthru.o 00:05:11.145 SYMLINK libspdk_blobfs_bdev.so 00:05:11.145 SO libspdk_bdev_delay.so.6.0 00:05:11.145 SYMLINK libspdk_bdev_gpt.so 00:05:11.145 CC module/bdev/null/bdev_null_rpc.o 00:05:11.145 SYMLINK libspdk_bdev_delay.so 00:05:11.145 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:11.145 CC module/bdev/raid/bdev_raid.o 00:05:11.145 CC module/bdev/split/vbdev_split.o 00:05:11.145 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:11.404 LIB libspdk_bdev_null.a 00:05:11.404 SO libspdk_bdev_null.so.6.0 00:05:11.404 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:11.404 LIB libspdk_bdev_malloc.a 00:05:11.404 SYMLINK libspdk_bdev_null.so 00:05:11.404 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:11.404 CC module/bdev/aio/bdev_aio.o 00:05:11.404 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:11.404 SO libspdk_bdev_malloc.so.6.0 00:05:11.404 CC module/bdev/split/vbdev_split_rpc.o 00:05:11.404 SYMLINK libspdk_bdev_malloc.so 00:05:11.404 CC module/bdev/aio/bdev_aio_rpc.o 00:05:11.404 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:11.663 LIB libspdk_bdev_passthru.a 00:05:11.663 LIB libspdk_bdev_split.a 00:05:11.663 SO libspdk_bdev_passthru.so.6.0 00:05:11.663 LIB libspdk_bdev_zone_block.a 00:05:11.663 SO libspdk_bdev_split.so.6.0 00:05:11.663 CC module/bdev/ftl/bdev_ftl.o 00:05:11.663 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:11.663 SO libspdk_bdev_zone_block.so.6.0 00:05:11.663 SYMLINK 
libspdk_bdev_passthru.so 00:05:11.663 SYMLINK libspdk_bdev_split.so 00:05:11.663 CC module/bdev/nvme/nvme_rpc.o 00:05:11.663 CC module/bdev/raid/bdev_raid_rpc.o 00:05:11.663 SYMLINK libspdk_bdev_zone_block.so 00:05:11.663 CC module/bdev/raid/bdev_raid_sb.o 00:05:11.663 LIB libspdk_bdev_lvol.a 00:05:11.663 LIB libspdk_bdev_aio.a 00:05:11.663 SO libspdk_bdev_lvol.so.6.0 00:05:11.663 SO libspdk_bdev_aio.so.6.0 00:05:11.922 SYMLINK libspdk_bdev_aio.so 00:05:11.922 SYMLINK libspdk_bdev_lvol.so 00:05:11.922 CC module/bdev/nvme/bdev_mdns_client.o 00:05:11.922 CC module/bdev/raid/raid0.o 00:05:11.922 LIB libspdk_bdev_ftl.a 00:05:11.922 CC module/bdev/nvme/vbdev_opal.o 00:05:11.922 SO libspdk_bdev_ftl.so.6.0 00:05:11.922 CC module/bdev/iscsi/bdev_iscsi.o 00:05:11.922 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:11.922 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:11.922 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:11.922 SYMLINK libspdk_bdev_ftl.so 00:05:11.922 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:12.182 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:12.182 CC module/bdev/raid/raid1.o 00:05:12.182 CC module/bdev/raid/concat.o 00:05:12.182 CC module/bdev/raid/raid5f.o 00:05:12.182 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:12.182 LIB libspdk_bdev_iscsi.a 00:05:12.182 SO libspdk_bdev_iscsi.so.6.0 00:05:12.441 SYMLINK libspdk_bdev_iscsi.so 00:05:12.441 LIB libspdk_bdev_virtio.a 00:05:12.441 SO libspdk_bdev_virtio.so.6.0 00:05:12.701 SYMLINK libspdk_bdev_virtio.so 00:05:12.701 LIB libspdk_bdev_raid.a 00:05:12.701 SO libspdk_bdev_raid.so.6.0 00:05:12.961 SYMLINK libspdk_bdev_raid.so 00:05:13.902 LIB libspdk_bdev_nvme.a 00:05:13.902 SO libspdk_bdev_nvme.so.7.1 00:05:13.902 SYMLINK libspdk_bdev_nvme.so 00:05:14.472 CC module/event/subsystems/iobuf/iobuf.o 00:05:14.472 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:14.472 CC module/event/subsystems/sock/sock.o 00:05:14.472 CC module/event/subsystems/fsdev/fsdev.o 00:05:14.472 CC module/event/subsystems/vmd/vmd.o 
00:05:14.472 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:14.472 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:14.472 CC module/event/subsystems/scheduler/scheduler.o 00:05:14.472 CC module/event/subsystems/keyring/keyring.o 00:05:14.733 LIB libspdk_event_keyring.a 00:05:14.733 LIB libspdk_event_sock.a 00:05:14.733 LIB libspdk_event_fsdev.a 00:05:14.733 LIB libspdk_event_vmd.a 00:05:14.733 LIB libspdk_event_scheduler.a 00:05:14.733 LIB libspdk_event_iobuf.a 00:05:14.733 LIB libspdk_event_vhost_blk.a 00:05:14.733 SO libspdk_event_keyring.so.1.0 00:05:14.733 SO libspdk_event_fsdev.so.1.0 00:05:14.733 SO libspdk_event_scheduler.so.4.0 00:05:14.733 SO libspdk_event_sock.so.5.0 00:05:14.733 SO libspdk_event_vhost_blk.so.3.0 00:05:14.733 SO libspdk_event_vmd.so.6.0 00:05:14.733 SO libspdk_event_iobuf.so.3.0 00:05:14.733 SYMLINK libspdk_event_keyring.so 00:05:14.733 SYMLINK libspdk_event_fsdev.so 00:05:14.733 SYMLINK libspdk_event_sock.so 00:05:14.733 SYMLINK libspdk_event_scheduler.so 00:05:14.733 SYMLINK libspdk_event_vhost_blk.so 00:05:14.733 SYMLINK libspdk_event_vmd.so 00:05:14.733 SYMLINK libspdk_event_iobuf.so 00:05:14.994 CC module/event/subsystems/accel/accel.o 00:05:15.254 LIB libspdk_event_accel.a 00:05:15.254 SO libspdk_event_accel.so.6.0 00:05:15.254 SYMLINK libspdk_event_accel.so 00:05:15.823 CC module/event/subsystems/bdev/bdev.o 00:05:15.823 LIB libspdk_event_bdev.a 00:05:15.823 SO libspdk_event_bdev.so.6.0 00:05:16.107 SYMLINK libspdk_event_bdev.so 00:05:16.366 CC module/event/subsystems/scsi/scsi.o 00:05:16.366 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:16.366 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:16.366 CC module/event/subsystems/ublk/ublk.o 00:05:16.366 CC module/event/subsystems/nbd/nbd.o 00:05:16.366 LIB libspdk_event_scsi.a 00:05:16.625 LIB libspdk_event_ublk.a 00:05:16.625 SO libspdk_event_scsi.so.6.0 00:05:16.625 LIB libspdk_event_nbd.a 00:05:16.625 SO libspdk_event_ublk.so.3.0 00:05:16.625 SO libspdk_event_nbd.so.6.0 
00:05:16.625 SYMLINK libspdk_event_scsi.so 00:05:16.625 LIB libspdk_event_nvmf.a 00:05:16.625 SYMLINK libspdk_event_ublk.so 00:05:16.625 SO libspdk_event_nvmf.so.6.0 00:05:16.625 SYMLINK libspdk_event_nbd.so 00:05:16.625 SYMLINK libspdk_event_nvmf.so 00:05:16.883 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:16.883 CC module/event/subsystems/iscsi/iscsi.o 00:05:17.143 LIB libspdk_event_vhost_scsi.a 00:05:17.143 LIB libspdk_event_iscsi.a 00:05:17.143 SO libspdk_event_vhost_scsi.so.3.0 00:05:17.143 SO libspdk_event_iscsi.so.6.0 00:05:17.143 SYMLINK libspdk_event_vhost_scsi.so 00:05:17.143 SYMLINK libspdk_event_iscsi.so 00:05:17.478 SO libspdk.so.6.0 00:05:17.478 SYMLINK libspdk.so 00:05:17.805 CC app/trace_record/trace_record.o 00:05:17.805 CXX app/trace/trace.o 00:05:17.805 CC app/spdk_lspci/spdk_lspci.o 00:05:17.805 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:17.805 CC app/iscsi_tgt/iscsi_tgt.o 00:05:17.805 CC app/nvmf_tgt/nvmf_main.o 00:05:17.805 CC app/spdk_tgt/spdk_tgt.o 00:05:17.805 CC examples/util/zipf/zipf.o 00:05:17.805 CC examples/ioat/perf/perf.o 00:05:17.805 CC test/thread/poller_perf/poller_perf.o 00:05:17.805 LINK spdk_lspci 00:05:17.805 LINK interrupt_tgt 00:05:17.805 LINK nvmf_tgt 00:05:17.805 LINK zipf 00:05:17.805 LINK iscsi_tgt 00:05:18.063 LINK spdk_trace_record 00:05:18.063 LINK poller_perf 00:05:18.063 LINK spdk_tgt 00:05:18.063 LINK ioat_perf 00:05:18.063 CC app/spdk_nvme_perf/perf.o 00:05:18.063 LINK spdk_trace 00:05:18.063 CC app/spdk_nvme_identify/identify.o 00:05:18.063 CC app/spdk_nvme_discover/discovery_aer.o 00:05:18.063 CC app/spdk_top/spdk_top.o 00:05:18.063 TEST_HEADER include/spdk/accel.h 00:05:18.063 TEST_HEADER include/spdk/accel_module.h 00:05:18.063 TEST_HEADER include/spdk/assert.h 00:05:18.063 TEST_HEADER include/spdk/barrier.h 00:05:18.063 TEST_HEADER include/spdk/base64.h 00:05:18.321 TEST_HEADER include/spdk/bdev.h 00:05:18.321 TEST_HEADER include/spdk/bdev_module.h 00:05:18.321 TEST_HEADER 
include/spdk/bdev_zone.h 00:05:18.321 TEST_HEADER include/spdk/bit_array.h 00:05:18.321 TEST_HEADER include/spdk/bit_pool.h 00:05:18.321 CC examples/ioat/verify/verify.o 00:05:18.321 TEST_HEADER include/spdk/blob_bdev.h 00:05:18.321 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:18.321 TEST_HEADER include/spdk/blobfs.h 00:05:18.321 TEST_HEADER include/spdk/blob.h 00:05:18.321 TEST_HEADER include/spdk/conf.h 00:05:18.321 TEST_HEADER include/spdk/config.h 00:05:18.321 TEST_HEADER include/spdk/cpuset.h 00:05:18.321 TEST_HEADER include/spdk/crc16.h 00:05:18.321 TEST_HEADER include/spdk/crc32.h 00:05:18.321 TEST_HEADER include/spdk/crc64.h 00:05:18.321 TEST_HEADER include/spdk/dif.h 00:05:18.321 TEST_HEADER include/spdk/dma.h 00:05:18.321 TEST_HEADER include/spdk/endian.h 00:05:18.321 TEST_HEADER include/spdk/env_dpdk.h 00:05:18.321 TEST_HEADER include/spdk/env.h 00:05:18.321 TEST_HEADER include/spdk/event.h 00:05:18.321 TEST_HEADER include/spdk/fd_group.h 00:05:18.321 TEST_HEADER include/spdk/fd.h 00:05:18.321 TEST_HEADER include/spdk/file.h 00:05:18.321 TEST_HEADER include/spdk/fsdev.h 00:05:18.321 TEST_HEADER include/spdk/fsdev_module.h 00:05:18.321 TEST_HEADER include/spdk/ftl.h 00:05:18.321 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:18.321 TEST_HEADER include/spdk/gpt_spec.h 00:05:18.321 TEST_HEADER include/spdk/hexlify.h 00:05:18.321 TEST_HEADER include/spdk/histogram_data.h 00:05:18.321 TEST_HEADER include/spdk/idxd.h 00:05:18.321 TEST_HEADER include/spdk/idxd_spec.h 00:05:18.321 TEST_HEADER include/spdk/init.h 00:05:18.321 TEST_HEADER include/spdk/ioat.h 00:05:18.321 TEST_HEADER include/spdk/ioat_spec.h 00:05:18.321 TEST_HEADER include/spdk/iscsi_spec.h 00:05:18.321 TEST_HEADER include/spdk/json.h 00:05:18.321 TEST_HEADER include/spdk/jsonrpc.h 00:05:18.321 TEST_HEADER include/spdk/keyring.h 00:05:18.321 TEST_HEADER include/spdk/keyring_module.h 00:05:18.321 TEST_HEADER include/spdk/likely.h 00:05:18.321 CC app/spdk_dd/spdk_dd.o 00:05:18.321 TEST_HEADER 
include/spdk/log.h 00:05:18.321 TEST_HEADER include/spdk/lvol.h 00:05:18.321 CC test/app/bdev_svc/bdev_svc.o 00:05:18.321 CC test/dma/test_dma/test_dma.o 00:05:18.322 TEST_HEADER include/spdk/md5.h 00:05:18.322 TEST_HEADER include/spdk/memory.h 00:05:18.322 TEST_HEADER include/spdk/mmio.h 00:05:18.322 TEST_HEADER include/spdk/nbd.h 00:05:18.322 TEST_HEADER include/spdk/net.h 00:05:18.322 TEST_HEADER include/spdk/notify.h 00:05:18.322 TEST_HEADER include/spdk/nvme.h 00:05:18.322 TEST_HEADER include/spdk/nvme_intel.h 00:05:18.322 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:18.322 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:18.322 TEST_HEADER include/spdk/nvme_spec.h 00:05:18.322 TEST_HEADER include/spdk/nvme_zns.h 00:05:18.322 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:18.322 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:18.322 TEST_HEADER include/spdk/nvmf.h 00:05:18.322 TEST_HEADER include/spdk/nvmf_spec.h 00:05:18.322 TEST_HEADER include/spdk/nvmf_transport.h 00:05:18.322 TEST_HEADER include/spdk/opal.h 00:05:18.322 TEST_HEADER include/spdk/opal_spec.h 00:05:18.322 TEST_HEADER include/spdk/pci_ids.h 00:05:18.322 TEST_HEADER include/spdk/pipe.h 00:05:18.322 TEST_HEADER include/spdk/queue.h 00:05:18.322 TEST_HEADER include/spdk/reduce.h 00:05:18.322 TEST_HEADER include/spdk/rpc.h 00:05:18.322 TEST_HEADER include/spdk/scheduler.h 00:05:18.322 TEST_HEADER include/spdk/scsi.h 00:05:18.322 TEST_HEADER include/spdk/scsi_spec.h 00:05:18.322 TEST_HEADER include/spdk/sock.h 00:05:18.322 TEST_HEADER include/spdk/stdinc.h 00:05:18.322 TEST_HEADER include/spdk/string.h 00:05:18.322 LINK spdk_nvme_discover 00:05:18.322 TEST_HEADER include/spdk/thread.h 00:05:18.322 TEST_HEADER include/spdk/trace.h 00:05:18.322 TEST_HEADER include/spdk/trace_parser.h 00:05:18.322 TEST_HEADER include/spdk/tree.h 00:05:18.322 TEST_HEADER include/spdk/ublk.h 00:05:18.322 TEST_HEADER include/spdk/util.h 00:05:18.322 TEST_HEADER include/spdk/uuid.h 00:05:18.322 TEST_HEADER include/spdk/version.h 
00:05:18.322 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:18.322 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:18.322 TEST_HEADER include/spdk/vhost.h 00:05:18.322 TEST_HEADER include/spdk/vmd.h 00:05:18.322 TEST_HEADER include/spdk/xor.h 00:05:18.322 TEST_HEADER include/spdk/zipf.h 00:05:18.322 CXX test/cpp_headers/accel.o 00:05:18.322 CC app/fio/nvme/fio_plugin.o 00:05:18.322 LINK verify 00:05:18.322 LINK bdev_svc 00:05:18.580 CXX test/cpp_headers/accel_module.o 00:05:18.580 CC app/vhost/vhost.o 00:05:18.580 LINK spdk_dd 00:05:18.580 CXX test/cpp_headers/assert.o 00:05:18.838 CC examples/thread/thread/thread_ex.o 00:05:18.838 LINK test_dma 00:05:18.838 LINK vhost 00:05:18.838 CXX test/cpp_headers/barrier.o 00:05:18.838 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:18.838 CXX test/cpp_headers/base64.o 00:05:18.838 LINK spdk_nvme_perf 00:05:18.838 LINK spdk_nvme 00:05:18.838 LINK spdk_nvme_identify 00:05:18.838 LINK thread 00:05:19.097 CC test/app/histogram_perf/histogram_perf.o 00:05:19.097 CC test/env/mem_callbacks/mem_callbacks.o 00:05:19.097 CC test/env/vtophys/vtophys.o 00:05:19.097 LINK spdk_top 00:05:19.097 CXX test/cpp_headers/bdev.o 00:05:19.097 CC app/fio/bdev/fio_plugin.o 00:05:19.097 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:19.097 LINK histogram_perf 00:05:19.097 LINK vtophys 00:05:19.097 LINK nvme_fuzz 00:05:19.097 CXX test/cpp_headers/bdev_module.o 00:05:19.097 CXX test/cpp_headers/bdev_zone.o 00:05:19.355 CC test/event/event_perf/event_perf.o 00:05:19.355 CC examples/sock/hello_world/hello_sock.o 00:05:19.355 CXX test/cpp_headers/bit_array.o 00:05:19.355 CC test/event/reactor/reactor.o 00:05:19.355 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:19.355 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:19.355 LINK event_perf 00:05:19.355 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:19.355 LINK reactor 00:05:19.356 CXX test/cpp_headers/bit_pool.o 00:05:19.356 LINK mem_callbacks 00:05:19.615 LINK hello_sock 00:05:19.615 CC 
test/env/memory/memory_ut.o 00:05:19.615 LINK spdk_bdev 00:05:19.615 LINK env_dpdk_post_init 00:05:19.615 CC test/env/pci/pci_ut.o 00:05:19.615 CXX test/cpp_headers/blob_bdev.o 00:05:19.615 CC test/event/reactor_perf/reactor_perf.o 00:05:19.615 CC test/event/app_repeat/app_repeat.o 00:05:19.615 LINK vhost_fuzz 00:05:19.874 CXX test/cpp_headers/blobfs_bdev.o 00:05:19.874 LINK reactor_perf 00:05:19.874 CC examples/vmd/lsvmd/lsvmd.o 00:05:19.874 CC examples/vmd/led/led.o 00:05:19.874 LINK app_repeat 00:05:19.874 CC test/nvme/aer/aer.o 00:05:19.874 CXX test/cpp_headers/blobfs.o 00:05:19.874 CC test/event/scheduler/scheduler.o 00:05:19.874 LINK lsvmd 00:05:19.874 LINK pci_ut 00:05:19.874 LINK led 00:05:20.133 CXX test/cpp_headers/blob.o 00:05:20.133 CC examples/idxd/perf/perf.o 00:05:20.133 CXX test/cpp_headers/conf.o 00:05:20.133 LINK aer 00:05:20.133 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:20.133 LINK scheduler 00:05:20.133 CC test/nvme/reset/reset.o 00:05:20.393 CC test/nvme/sgl/sgl.o 00:05:20.393 CXX test/cpp_headers/config.o 00:05:20.393 CXX test/cpp_headers/cpuset.o 00:05:20.393 CC test/nvme/e2edp/nvme_dp.o 00:05:20.393 CC test/app/jsoncat/jsoncat.o 00:05:20.393 LINK idxd_perf 00:05:20.393 CXX test/cpp_headers/crc16.o 00:05:20.393 LINK hello_fsdev 00:05:20.393 LINK reset 00:05:20.652 LINK sgl 00:05:20.652 LINK jsoncat 00:05:20.652 CXX test/cpp_headers/crc32.o 00:05:20.652 LINK nvme_dp 00:05:20.652 CXX test/cpp_headers/crc64.o 00:05:20.652 CC test/nvme/overhead/overhead.o 00:05:20.652 CXX test/cpp_headers/dif.o 00:05:20.652 CC examples/accel/perf/accel_perf.o 00:05:20.652 LINK memory_ut 00:05:20.652 CXX test/cpp_headers/dma.o 00:05:20.911 LINK iscsi_fuzz 00:05:20.911 CC test/nvme/err_injection/err_injection.o 00:05:20.911 CC examples/nvme/hello_world/hello_world.o 00:05:20.911 CC examples/blob/hello_world/hello_blob.o 00:05:20.911 CC examples/blob/cli/blobcli.o 00:05:20.911 CC test/nvme/startup/startup.o 00:05:20.911 CXX test/cpp_headers/endian.o 
00:05:20.911 LINK overhead 00:05:20.911 LINK err_injection 00:05:20.911 CC test/rpc_client/rpc_client_test.o 00:05:20.911 LINK startup 00:05:21.169 LINK hello_blob 00:05:21.169 CXX test/cpp_headers/env_dpdk.o 00:05:21.169 LINK hello_world 00:05:21.170 CC test/app/stub/stub.o 00:05:21.170 CXX test/cpp_headers/env.o 00:05:21.170 LINK accel_perf 00:05:21.170 CXX test/cpp_headers/event.o 00:05:21.170 LINK rpc_client_test 00:05:21.170 CXX test/cpp_headers/fd_group.o 00:05:21.170 CC test/nvme/reserve/reserve.o 00:05:21.170 CXX test/cpp_headers/fd.o 00:05:21.170 CXX test/cpp_headers/file.o 00:05:21.170 CC examples/nvme/reconnect/reconnect.o 00:05:21.170 LINK stub 00:05:21.170 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:21.170 CXX test/cpp_headers/fsdev.o 00:05:21.435 CXX test/cpp_headers/fsdev_module.o 00:05:21.435 LINK blobcli 00:05:21.435 LINK reserve 00:05:21.435 CC examples/nvme/arbitration/arbitration.o 00:05:21.435 CC examples/nvme/hotplug/hotplug.o 00:05:21.435 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:21.435 CXX test/cpp_headers/ftl.o 00:05:21.435 CC examples/nvme/abort/abort.o 00:05:21.435 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:21.695 LINK reconnect 00:05:21.695 CC test/nvme/simple_copy/simple_copy.o 00:05:21.695 LINK cmb_copy 00:05:21.695 CXX test/cpp_headers/fuse_dispatcher.o 00:05:21.695 LINK hotplug 00:05:21.695 LINK pmr_persistence 00:05:21.695 CC examples/bdev/hello_world/hello_bdev.o 00:05:21.695 LINK arbitration 00:05:21.695 CXX test/cpp_headers/gpt_spec.o 00:05:21.695 LINK nvme_manage 00:05:21.953 CXX test/cpp_headers/hexlify.o 00:05:21.953 LINK abort 00:05:21.953 LINK simple_copy 00:05:21.953 CC examples/bdev/bdevperf/bdevperf.o 00:05:21.953 LINK hello_bdev 00:05:21.953 CC test/nvme/connect_stress/connect_stress.o 00:05:21.953 CXX test/cpp_headers/histogram_data.o 00:05:21.953 CC test/accel/dif/dif.o 00:05:21.953 CXX test/cpp_headers/idxd.o 00:05:21.953 CXX test/cpp_headers/idxd_spec.o 00:05:21.953 CC 
test/nvme/boot_partition/boot_partition.o 00:05:22.212 LINK connect_stress 00:05:22.212 CC test/blobfs/mkfs/mkfs.o 00:05:22.212 CXX test/cpp_headers/init.o 00:05:22.212 CC test/lvol/esnap/esnap.o 00:05:22.212 CC test/nvme/compliance/nvme_compliance.o 00:05:22.212 CC test/nvme/fused_ordering/fused_ordering.o 00:05:22.212 LINK boot_partition 00:05:22.212 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:22.212 CC test/nvme/fdp/fdp.o 00:05:22.212 LINK mkfs 00:05:22.212 CXX test/cpp_headers/ioat.o 00:05:22.471 LINK fused_ordering 00:05:22.471 LINK doorbell_aers 00:05:22.471 CXX test/cpp_headers/ioat_spec.o 00:05:22.471 CC test/nvme/cuse/cuse.o 00:05:22.471 CXX test/cpp_headers/iscsi_spec.o 00:05:22.471 LINK nvme_compliance 00:05:22.471 CXX test/cpp_headers/json.o 00:05:22.471 CXX test/cpp_headers/jsonrpc.o 00:05:22.471 CXX test/cpp_headers/keyring.o 00:05:22.471 CXX test/cpp_headers/keyring_module.o 00:05:22.729 LINK fdp 00:05:22.730 CXX test/cpp_headers/likely.o 00:05:22.730 CXX test/cpp_headers/log.o 00:05:22.730 LINK dif 00:05:22.730 CXX test/cpp_headers/lvol.o 00:05:22.730 CXX test/cpp_headers/md5.o 00:05:22.730 CXX test/cpp_headers/memory.o 00:05:22.730 CXX test/cpp_headers/mmio.o 00:05:22.730 LINK bdevperf 00:05:22.730 CXX test/cpp_headers/nbd.o 00:05:22.730 CXX test/cpp_headers/net.o 00:05:22.730 CXX test/cpp_headers/notify.o 00:05:22.730 CXX test/cpp_headers/nvme.o 00:05:22.730 CXX test/cpp_headers/nvme_intel.o 00:05:22.989 CXX test/cpp_headers/nvme_ocssd.o 00:05:22.989 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:22.989 CXX test/cpp_headers/nvme_spec.o 00:05:22.989 CXX test/cpp_headers/nvme_zns.o 00:05:22.989 CXX test/cpp_headers/nvmf_cmd.o 00:05:22.989 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:22.989 CC test/bdev/bdevio/bdevio.o 00:05:22.989 CXX test/cpp_headers/nvmf.o 00:05:22.989 CXX test/cpp_headers/nvmf_spec.o 00:05:22.989 CXX test/cpp_headers/nvmf_transport.o 00:05:23.249 CC examples/nvmf/nvmf/nvmf.o 00:05:23.249 CXX test/cpp_headers/opal.o 00:05:23.249 
CXX test/cpp_headers/opal_spec.o 00:05:23.249 CXX test/cpp_headers/pci_ids.o 00:05:23.249 CXX test/cpp_headers/pipe.o 00:05:23.249 CXX test/cpp_headers/queue.o 00:05:23.249 CXX test/cpp_headers/reduce.o 00:05:23.249 CXX test/cpp_headers/rpc.o 00:05:23.249 CXX test/cpp_headers/scheduler.o 00:05:23.249 CXX test/cpp_headers/scsi.o 00:05:23.249 CXX test/cpp_headers/scsi_spec.o 00:05:23.249 CXX test/cpp_headers/sock.o 00:05:23.508 CXX test/cpp_headers/stdinc.o 00:05:23.508 LINK bdevio 00:05:23.508 CXX test/cpp_headers/string.o 00:05:23.508 CXX test/cpp_headers/thread.o 00:05:23.508 LINK nvmf 00:05:23.508 CXX test/cpp_headers/trace.o 00:05:23.508 CXX test/cpp_headers/trace_parser.o 00:05:23.508 CXX test/cpp_headers/tree.o 00:05:23.508 CXX test/cpp_headers/ublk.o 00:05:23.508 CXX test/cpp_headers/util.o 00:05:23.508 CXX test/cpp_headers/uuid.o 00:05:23.508 CXX test/cpp_headers/version.o 00:05:23.508 CXX test/cpp_headers/vfio_user_pci.o 00:05:23.508 CXX test/cpp_headers/vfio_user_spec.o 00:05:23.768 LINK cuse 00:05:23.768 CXX test/cpp_headers/vhost.o 00:05:23.768 CXX test/cpp_headers/vmd.o 00:05:23.768 CXX test/cpp_headers/xor.o 00:05:23.768 CXX test/cpp_headers/zipf.o 00:05:27.976 LINK esnap 00:05:27.976 00:05:27.976 real 1m17.295s 00:05:27.976 user 5m36.153s 00:05:27.976 sys 1m8.406s 00:05:27.976 18:45:57 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:27.976 18:45:57 make -- common/autotest_common.sh@10 -- $ set +x 00:05:27.976 ************************************ 00:05:27.976 END TEST make 00:05:27.976 ************************************ 00:05:28.236 18:45:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:28.236 18:45:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:28.236 18:45:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:28.236 18:45:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:28.236 18:45:57 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:28.236 18:45:57 -- pm/common@44 -- $ pid=6205 00:05:28.236 18:45:57 -- pm/common@50 -- $ kill -TERM 6205 00:05:28.236 18:45:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:28.236 18:45:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:28.236 18:45:57 -- pm/common@44 -- $ pid=6207 00:05:28.236 18:45:57 -- pm/common@50 -- $ kill -TERM 6207 00:05:28.236 18:45:57 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:28.236 18:45:57 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:28.236 18:45:57 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.236 18:45:57 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.236 18:45:57 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.236 18:45:57 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.236 18:45:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.236 18:45:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.236 18:45:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.236 18:45:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.236 18:45:57 -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.236 18:45:57 -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.236 18:45:57 -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.236 18:45:57 -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.236 18:45:57 -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.236 18:45:57 -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.236 18:45:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.236 18:45:57 -- scripts/common.sh@344 -- # case "$op" in 00:05:28.236 18:45:57 -- scripts/common.sh@345 -- # : 1 00:05:28.236 18:45:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.236 18:45:57 -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.236 18:45:57 -- scripts/common.sh@365 -- # decimal 1 00:05:28.236 18:45:57 -- scripts/common.sh@353 -- # local d=1 00:05:28.236 18:45:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.236 18:45:57 -- scripts/common.sh@355 -- # echo 1 00:05:28.236 18:45:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.236 18:45:57 -- scripts/common.sh@366 -- # decimal 2 00:05:28.236 18:45:57 -- scripts/common.sh@353 -- # local d=2 00:05:28.236 18:45:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.236 18:45:57 -- scripts/common.sh@355 -- # echo 2 00:05:28.236 18:45:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.236 18:45:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.236 18:45:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.236 18:45:57 -- scripts/common.sh@368 -- # return 0 00:05:28.236 18:45:57 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.236 18:45:57 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.236 --rc genhtml_branch_coverage=1 00:05:28.236 --rc genhtml_function_coverage=1 00:05:28.236 --rc genhtml_legend=1 00:05:28.236 --rc geninfo_all_blocks=1 00:05:28.236 --rc geninfo_unexecuted_blocks=1 00:05:28.236 00:05:28.236 ' 00:05:28.236 18:45:57 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.236 --rc genhtml_branch_coverage=1 00:05:28.236 --rc genhtml_function_coverage=1 00:05:28.236 --rc genhtml_legend=1 00:05:28.236 --rc geninfo_all_blocks=1 00:05:28.236 --rc geninfo_unexecuted_blocks=1 00:05:28.236 00:05:28.236 ' 00:05:28.236 18:45:57 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.236 --rc 
genhtml_branch_coverage=1 00:05:28.236 --rc genhtml_function_coverage=1 00:05:28.236 --rc genhtml_legend=1 00:05:28.236 --rc geninfo_all_blocks=1 00:05:28.236 --rc geninfo_unexecuted_blocks=1 00:05:28.236 00:05:28.236 ' 00:05:28.236 18:45:57 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.236 --rc genhtml_branch_coverage=1 00:05:28.236 --rc genhtml_function_coverage=1 00:05:28.236 --rc genhtml_legend=1 00:05:28.236 --rc geninfo_all_blocks=1 00:05:28.236 --rc geninfo_unexecuted_blocks=1 00:05:28.236 00:05:28.236 ' 00:05:28.237 18:45:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.237 18:45:57 -- nvmf/common.sh@7 -- # uname -s 00:05:28.237 18:45:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.237 18:45:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.237 18:45:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.237 18:45:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.237 18:45:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.237 18:45:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.237 18:45:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.237 18:45:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.237 18:45:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.237 18:45:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.497 18:45:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc4a61d5-b373-4ac2-b454-18cb5da06a10 00:05:28.497 18:45:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=bc4a61d5-b373-4ac2-b454-18cb5da06a10 00:05:28.497 18:45:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.497 18:45:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.497 18:45:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.497 18:45:57 -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.497 18:45:57 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.497 18:45:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:28.497 18:45:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.497 18:45:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.497 18:45:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.497 18:45:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.497 18:45:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.497 18:45:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.497 18:45:57 -- paths/export.sh@5 -- # export PATH 00:05:28.497 18:45:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.497 18:45:57 -- nvmf/common.sh@51 -- # : 0 00:05:28.497 18:45:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:28.497 18:45:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:28.497 18:45:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.497 18:45:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.497 18:45:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.497 18:45:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:28.497 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:28.497 18:45:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:28.497 18:45:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:28.497 18:45:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:28.497 18:45:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:28.497 18:45:57 -- spdk/autotest.sh@32 -- # uname -s 00:05:28.497 18:45:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:28.497 18:45:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:28.497 18:45:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:28.497 18:45:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:28.497 18:45:57 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:28.497 18:45:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:28.497 18:45:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:28.497 18:45:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:28.497 18:45:57 -- spdk/autotest.sh@48 -- # udevadm_pid=68314 00:05:28.497 18:45:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:28.497 18:45:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:28.497 18:45:57 -- pm/common@17 -- # local monitor 00:05:28.497 18:45:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:28.497 18:45:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:28.497 18:45:57 -- pm/common@21 -- # date +%s 00:05:28.497 18:45:57 -- pm/common@25 -- 
# sleep 1
00:05:28.497 18:45:57 -- pm/common@21 -- # date +%s
00:05:28.497 18:45:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732819557
00:05:28.497 18:45:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732819557
00:05:28.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732819557_collect-vmstat.pm.log
00:05:28.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732819557_collect-cpu-load.pm.log
00:05:29.485 18:45:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:29.485 18:45:58 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:29.485 18:45:58 -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:29.485 18:45:58 -- common/autotest_common.sh@10 -- # set +x
00:05:29.485 18:45:58 -- spdk/autotest.sh@59 -- # create_test_list
00:05:29.485 18:45:58 -- common/autotest_common.sh@752 -- # xtrace_disable
00:05:29.485 18:45:58 -- common/autotest_common.sh@10 -- # set +x
00:05:29.485 18:45:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:05:29.485 18:45:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:05:29.485 18:45:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:05:29.485 18:45:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:05:29.485 18:45:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:05:29.485 18:45:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:29.485 18:45:59 -- common/autotest_common.sh@1457 -- # uname
00:05:29.485 18:45:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:05:29.485 18:45:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:29.485 18:45:59 -- common/autotest_common.sh@1477 -- # uname
00:05:29.485 18:45:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:05:29.485 18:45:59 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:29.485 18:45:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:29.768 lcov: LCOV version 1.15
00:05:29.768 18:45:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:05:44.694 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:44.694 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:05:59.660 18:46:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:05:59.660 18:46:28 -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:59.660 18:46:28 -- common/autotest_common.sh@10 -- # set +x
00:05:59.660 18:46:28 -- spdk/autotest.sh@78 -- # rm -f
00:05:59.660 18:46:28 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:59.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:59.660 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:05:59.660 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:05:59.660 18:46:28 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:05:59.660 18:46:28 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:05:59.660 18:46:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:05:59.660 18:46:28 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:05:59.660 18:46:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:59.660 18:46:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:05:59.660 18:46:29 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:05:59.660 18:46:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:59.660 18:46:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:59.660 18:46:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:59.660 18:46:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1
00:05:59.660 18:46:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:05:59.660 18:46:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:05:59.660 18:46:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:59.660 18:46:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:59.660 18:46:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2
00:05:59.660 18:46:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n2
00:05:59.660 18:46:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:05:59.660 18:46:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:59.660 18:46:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:59.660 18:46:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3
00:05:59.660 18:46:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n3
00:05:59.660 18:46:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:05:59.660 18:46:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:59.660 18:46:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:05:59.660 18:46:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:59.660 18:46:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:59.660 18:46:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:05:59.660 18:46:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:05:59.660 18:46:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:59.660 No valid GPT data, bailing
00:05:59.660 18:46:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:59.660 18:46:29 -- scripts/common.sh@394 -- # pt=
00:05:59.660 18:46:29 -- scripts/common.sh@395 -- # return 1
00:05:59.660 18:46:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:59.660 1+0 records in
00:05:59.660 1+0 records out
00:05:59.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666049 s, 157 MB/s
00:05:59.660 18:46:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:59.660 18:46:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:59.660 18:46:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:05:59.660 18:46:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:05:59.660 18:46:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:05:59.660 No valid GPT data, bailing
00:05:59.660 18:46:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:05:59.660 18:46:29 -- scripts/common.sh@394 -- # pt=
00:05:59.660 18:46:29 -- scripts/common.sh@395 -- # return 1
00:05:59.660 18:46:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:05:59.660 1+0 records in
00:05:59.660 1+0 records out
00:05:59.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00342434 s, 306 MB/s
00:05:59.660 18:46:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:59.660 18:46:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:59.660 18:46:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:05:59.660 18:46:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:05:59.660 18:46:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:05:59.660 No valid GPT data, bailing
00:05:59.660 18:46:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:05:59.660 18:46:29 -- scripts/common.sh@394 -- # pt=
00:05:59.660 18:46:29 -- scripts/common.sh@395 -- # return 1
00:05:59.660 18:46:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:05:59.660 1+0 records in
00:05:59.660 1+0 records out
00:05:59.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463721 s, 226 MB/s
00:05:59.660 18:46:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:59.660 18:46:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:59.660 18:46:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:05:59.660 18:46:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:05:59.660 18:46:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:05:59.920 No valid GPT data, bailing
00:05:59.920 18:46:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:05:59.920 18:46:29 -- scripts/common.sh@394 -- # pt=
00:05:59.920 18:46:29 -- scripts/common.sh@395 -- # return 1
00:05:59.920 18:46:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:05:59.920 1+0 records in
00:05:59.920 1+0 records out
00:05:59.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579594 s, 181 MB/s
00:05:59.920 18:46:29 -- spdk/autotest.sh@105 -- # sync
00:05:59.920 18:46:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:59.920 18:46:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:59.920 18:46:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:03.213 18:46:32 -- spdk/autotest.sh@111 -- # uname -s
00:06:03.213 18:46:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:03.213 18:46:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:03.213 18:46:32 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:03.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:03.472 Hugepages
00:06:03.472 node hugesize free / total
00:06:03.472 node0 1048576kB 0 / 0
00:06:03.472 node0 2048kB 0 / 0
00:06:03.472
00:06:03.730 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:03.730 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:06:03.730 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:06:03.989 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:06:03.989 18:46:33 -- spdk/autotest.sh@117 -- # uname -s
00:06:03.989 18:46:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:03.989 18:46:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:03.989 18:46:33 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:04.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:04.927 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:04.927 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:04.927 18:46:34 -- common/autotest_common.sh@1517 -- # sleep 1
00:06:05.865 18:46:35 -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:05.865 18:46:35 -- common/autotest_common.sh@1518 -- # local bdfs
00:06:05.865 18:46:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:05.865 18:46:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:05.865 18:46:35 -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:05.865 18:46:35 -- common/autotest_common.sh@1498 -- # local bdfs
00:06:05.865 18:46:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:05.865 18:46:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:05.865 18:46:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:06.123 18:46:35 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:06:06.123 18:46:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:06.123 18:46:35 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:06.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:06.643 Waiting for block devices as requested
00:06:06.643 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:06.643 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:06.903 18:46:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:06.903 18:46:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:06.903 18:46:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:06.903 18:46:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:06:06.903 18:46:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:06.903 18:46:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:06:06.903 18:46:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:06.903 18:46:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:06:06.903 18:46:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:06:06.903 18:46:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:06:06.903 18:46:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:06:06.903 18:46:36 -- common/autotest_common.sh@1531 -- # grep oacs
00:06:06.903 18:46:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:06.903 18:46:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:06.903 18:46:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:06.903 18:46:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:06.903 18:46:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:06:06.903 18:46:36 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:06.903 18:46:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:06.903 18:46:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:06.903 18:46:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:06:06.903 18:46:36 -- common/autotest_common.sh@1543 -- # continue
00:06:06.903 18:46:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:06.903 18:46:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:06:06.903 18:46:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:06.903 18:46:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:06:06.903 18:46:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:06.903 18:46:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:06:06.903 18:46:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:06.903 18:46:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:06.903 18:46:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:06.903 18:46:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:06.903 18:46:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:06.904 18:46:36 -- common/autotest_common.sh@1531 -- # grep oacs
00:06:06.904 18:46:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:06.904 18:46:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:06.904 18:46:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:06.904 18:46:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:06.904 18:46:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:06.904 18:46:36 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:06.904 18:46:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:06.904 18:46:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:06.904 18:46:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:06:06.904 18:46:36 -- common/autotest_common.sh@1543 -- # continue
00:06:06.904 18:46:36 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:06.904 18:46:36 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:06.904 18:46:36 -- common/autotest_common.sh@10 -- # set +x
00:06:06.904 18:46:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:06.904 18:46:36 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:06.904 18:46:36 -- common/autotest_common.sh@10 -- # set +x
00:06:06.904 18:46:36 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:07.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:07.868 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:07.868 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:07.868 18:46:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:07.868 18:46:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:07.868 18:46:37 -- common/autotest_common.sh@10 -- # set +x
00:06:08.128 18:46:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:08.128 18:46:37 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:08.128 18:46:37 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:08.128 18:46:37 -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:08.128 18:46:37 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:08.128 18:46:37 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:08.128 18:46:37 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:08.128 18:46:37 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:08.128 18:46:37 -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:08.128 18:46:37 -- common/autotest_common.sh@1498 -- # local bdfs
00:06:08.128 18:46:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:08.128 18:46:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:08.128 18:46:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:08.128 18:46:37 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:06:08.128 18:46:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:08.128 18:46:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:08.128 18:46:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:08.128 18:46:37 -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:08.128 18:46:37 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:08.128 18:46:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:08.128 18:46:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:06:08.128 18:46:37 -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:08.128 18:46:37 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:08.128 18:46:37 -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:06:08.128 18:46:37 -- common/autotest_common.sh@1572 -- # return 0
00:06:08.128 18:46:37 -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:06:08.128 18:46:37 -- common/autotest_common.sh@1580 -- # return 0
00:06:08.128 18:46:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:08.128 18:46:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:08.128 18:46:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:08.128 18:46:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:08.128 18:46:37 -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:08.128 18:46:37 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:08.128 18:46:37 -- common/autotest_common.sh@10 -- # set +x
00:06:08.128 18:46:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:08.128 18:46:37 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:08.128 18:46:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:08.128 18:46:37 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.128 18:46:37 -- common/autotest_common.sh@10 -- # set +x
00:06:08.128 ************************************
00:06:08.128 START TEST env
00:06:08.128 ************************************
00:06:08.128 18:46:37 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:08.128 * Looking for test storage...
00:06:08.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1693 -- # lcov --version
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:08.388 18:46:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:08.388 18:46:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:08.388 18:46:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:08.388 18:46:37 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:08.388 18:46:37 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:08.388 18:46:37 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:08.388 18:46:37 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:08.388 18:46:37 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:08.388 18:46:37 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:08.388 18:46:37 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:08.388 18:46:37 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:08.388 18:46:37 env -- scripts/common.sh@344 -- # case "$op" in
00:06:08.388 18:46:37 env -- scripts/common.sh@345 -- # : 1
00:06:08.388 18:46:37 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:08.388 18:46:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:08.388 18:46:37 env -- scripts/common.sh@365 -- # decimal 1
00:06:08.388 18:46:37 env -- scripts/common.sh@353 -- # local d=1
00:06:08.388 18:46:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:08.388 18:46:37 env -- scripts/common.sh@355 -- # echo 1
00:06:08.388 18:46:37 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:08.388 18:46:37 env -- scripts/common.sh@366 -- # decimal 2
00:06:08.388 18:46:37 env -- scripts/common.sh@353 -- # local d=2
00:06:08.388 18:46:37 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:08.388 18:46:37 env -- scripts/common.sh@355 -- # echo 2
00:06:08.388 18:46:37 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:08.388 18:46:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:08.388 18:46:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:08.388 18:46:37 env -- scripts/common.sh@368 -- # return 0
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:08.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.388 --rc genhtml_branch_coverage=1
00:06:08.388 --rc genhtml_function_coverage=1
00:06:08.388 --rc genhtml_legend=1
00:06:08.388 --rc geninfo_all_blocks=1
00:06:08.388 --rc geninfo_unexecuted_blocks=1
00:06:08.388
00:06:08.388 '
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:08.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.388 --rc genhtml_branch_coverage=1
00:06:08.388 --rc genhtml_function_coverage=1
00:06:08.388 --rc genhtml_legend=1
00:06:08.388 --rc geninfo_all_blocks=1
00:06:08.388 --rc geninfo_unexecuted_blocks=1
00:06:08.388
00:06:08.388 '
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:08.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.388 --rc genhtml_branch_coverage=1
00:06:08.388 --rc genhtml_function_coverage=1
00:06:08.388 --rc genhtml_legend=1
00:06:08.388 --rc geninfo_all_blocks=1
00:06:08.388 --rc geninfo_unexecuted_blocks=1
00:06:08.388
00:06:08.388 '
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:08.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.388 --rc genhtml_branch_coverage=1
00:06:08.388 --rc genhtml_function_coverage=1
00:06:08.388 --rc genhtml_legend=1
00:06:08.388 --rc geninfo_all_blocks=1
00:06:08.388 --rc geninfo_unexecuted_blocks=1
00:06:08.388
00:06:08.388 '
00:06:08.388 18:46:37 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:08.388 18:46:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.388 18:46:37 env -- common/autotest_common.sh@10 -- # set +x
00:06:08.388 ************************************
00:06:08.388 START TEST env_memory
00:06:08.388 ************************************
00:06:08.388 18:46:37 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:08.389
00:06:08.389
00:06:08.389 CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.389 http://cunit.sourceforge.net/
00:06:08.389
00:06:08.389
00:06:08.389 Suite: memory
00:06:08.389 Test: alloc and free memory map ...[2024-11-28 18:46:37.914651] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:08.389 passed
00:06:08.389 Test: mem map translation ...[2024-11-28 18:46:37.954866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:08.389 [2024-11-28 18:46:37.954955] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:08.389 [2024-11-28 18:46:37.955054] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:08.389 [2024-11-28 18:46:37.955102] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:08.648 passed
00:06:08.648 Test: mem map registration ...[2024-11-28 18:46:38.017372] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:08.648 [2024-11-28 18:46:38.017458] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:08.648 passed
00:06:08.648 Test: mem map adjacent registrations ...passed
00:06:08.648
00:06:08.648 Run Summary: Type Total Ran Passed Failed Inactive
00:06:08.648 suites 1 1 n/a 0 0
00:06:08.648 tests 4 4 4 0 0
00:06:08.649 asserts 152 152 152 0 n/a
00:06:08.649
00:06:08.649 Elapsed time = 0.227 seconds
00:06:08.649
00:06:08.649 real 0m0.283s
00:06:08.649 user 0m0.236s
00:06:08.649 sys 0m0.037s
00:06:08.649 18:46:38 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:08.649 ************************************
00:06:08.649 END TEST env_memory
00:06:08.649 ************************************
00:06:08.649 18:46:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:08.649 18:46:38 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:08.649 18:46:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:08.649 18:46:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.649 18:46:38 env -- common/autotest_common.sh@10 -- # set +x
00:06:08.649 ************************************
00:06:08.649 START TEST env_vtophys
00:06:08.649 ************************************
00:06:08.649 18:46:38 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:08.649 EAL: lib.eal log level changed from notice to debug
00:06:08.649 EAL: Detected lcore 0 as core 0 on socket 0
00:06:08.649 EAL: Detected lcore 1 as core 0 on socket 0
00:06:08.649 EAL: Detected lcore 2 as core 0 on socket 0
00:06:08.649 EAL: Detected lcore 3 as core 0 on socket 0
00:06:08.649 EAL: Detected lcore 4 as core 0 on socket 0
00:06:08.649 EAL: Detected lcore 5 as core 0 on socket 0
00:06:08.649 EAL: Detected lcore 6 as core 0 on socket 0
00:06:08.649 EAL: Detected lcore 7 as core 0 on socket 0
00:06:08.649 EAL: Detected lcore 8 as core 0 on socket 0
00:06:08.649 EAL: Detected lcore 9 as core 0 on socket 0
00:06:08.649 EAL: Maximum logical cores by configuration: 128
00:06:08.649 EAL: Detected CPU lcores: 10
00:06:08.649 EAL: Detected NUMA nodes: 1
00:06:08.649 EAL: Checking presence of .so 'librte_eal.so.25.0'
00:06:08.649 EAL: Detected shared linkage of DPDK
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0
00:06:08.649 EAL: Registered [vdev] bus.
00:06:08.649 EAL: bus.vdev log level changed from disabled to notice
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0
00:06:08.649 EAL: pmd.net.i40e.init log level changed from disabled to notice
00:06:08.649 EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so
00:06:08.649 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so
00:06:08.908 EAL: No shared files mode enabled, IPC will be disabled
00:06:08.908 EAL: No shared files mode enabled, IPC is disabled
00:06:08.908 EAL: Selected IOVA mode 'PA'
00:06:08.908 EAL: Probing VFIO support...
00:06:08.908 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:08.908 EAL: VFIO modules not loaded, skipping VFIO support...
00:06:08.908 EAL: Ask a virtual area of 0x2e000 bytes
00:06:08.908 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:08.908 EAL: Setting up physically contiguous memory...
00:06:08.908 EAL: Setting maximum number of open files to 524288
00:06:08.908 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:08.908 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:08.908 EAL: Ask a virtual area of 0x61000 bytes
00:06:08.908 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:08.908 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:08.908 EAL: Ask a virtual area of 0x400000000 bytes
00:06:08.908 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:08.908 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:08.908 EAL: Ask a virtual area of 0x61000 bytes
00:06:08.908 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:08.908 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:08.908 EAL: Ask a virtual area of 0x400000000 bytes
00:06:08.908 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:08.909 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:08.909 EAL: Ask a virtual area of 0x61000 bytes
00:06:08.909 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:08.909 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:08.909 EAL: Ask a virtual area of 0x400000000 bytes
00:06:08.909 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:08.909 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:08.909 EAL: Ask a virtual area of 0x61000 bytes
00:06:08.909 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:08.909 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:08.909 EAL: Ask a virtual area of 0x400000000 bytes
00:06:08.909 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:08.909 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:08.909 EAL: Hugepages will be freed exactly as allocated.
00:06:08.909 EAL: No shared files mode enabled, IPC is disabled
00:06:08.909 EAL: No shared files mode enabled, IPC is disabled
00:06:08.909 EAL: TSC frequency is ~2294600 KHz
00:06:08.909 EAL: Main lcore 0 is ready (tid=7fac80620a40;cpuset=[0])
00:06:08.909 EAL: Trying to obtain current memory policy.
00:06:08.909 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:08.909 EAL: Restoring previous memory policy: 0
00:06:08.909 EAL: request: mp_malloc_sync
00:06:08.909 EAL: No shared files mode enabled, IPC is disabled
00:06:08.909 EAL: Heap on socket 0 was expanded by 2MB
00:06:08.909 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment
00:06:08.909 EAL: No shared files mode enabled, IPC is disabled
00:06:08.909 EAL: Mem event callback 'spdk:(nil)' registered
00:06:08.909 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:06:08.909
00:06:08.909
00:06:08.909 CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.909 http://cunit.sourceforge.net/
00:06:08.909
00:06:08.909
00:06:08.909 Suite: components_suite
00:06:09.168 Test: vtophys_malloc_test ...passed
00:06:09.168 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:09.168 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:09.168 EAL: Restoring previous memory policy: 4
00:06:09.168 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.168 EAL: request: mp_malloc_sync
00:06:09.168 EAL: No shared files mode enabled, IPC is disabled
00:06:09.168 EAL: Heap on socket 0 was expanded by 4MB
00:06:09.168 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.168 EAL: request: mp_malloc_sync
00:06:09.168 EAL: No shared files mode enabled, IPC is disabled
00:06:09.168 EAL: Heap on socket 0 was shrunk by 4MB
00:06:09.168 EAL: Trying to obtain current memory policy.
00:06:09.168 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:09.168 EAL: Restoring previous memory policy: 4
00:06:09.168 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.168 EAL: request: mp_malloc_sync
00:06:09.168 EAL: No shared files mode enabled, IPC is disabled
00:06:09.168 EAL: Heap on socket 0 was expanded by 6MB
00:06:09.168 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.168 EAL: request: mp_malloc_sync
00:06:09.168 EAL: No shared files mode enabled, IPC is disabled
00:06:09.168 EAL: Heap on socket 0 was shrunk by 6MB
00:06:09.168 EAL: Trying to obtain current memory policy.
00:06:09.168 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:09.168 EAL: Restoring previous memory policy: 4
00:06:09.168 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.168 EAL: request: mp_malloc_sync
00:06:09.168 EAL: No shared files mode enabled, IPC is disabled
00:06:09.168 EAL: Heap on socket 0 was expanded by 10MB
00:06:09.168 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.169 EAL: request: mp_malloc_sync
00:06:09.169 EAL: No shared files mode enabled, IPC is disabled
00:06:09.169 EAL: Heap on socket 0 was shrunk by 10MB
00:06:09.169 EAL: Trying to obtain current memory policy.
00:06:09.169 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:09.169 EAL: Restoring previous memory policy: 4
00:06:09.169 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.169 EAL: request: mp_malloc_sync
00:06:09.169 EAL: No shared files mode enabled, IPC is disabled
00:06:09.169 EAL: Heap on socket 0 was expanded by 18MB
00:06:09.169 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.169 EAL: request: mp_malloc_sync
00:06:09.169 EAL: No shared files mode enabled, IPC is disabled
00:06:09.169 EAL: Heap on socket 0 was shrunk by 18MB
00:06:09.169 EAL: Trying to obtain current memory policy.
00:06:09.169 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:09.169 EAL: Restoring previous memory policy: 4
00:06:09.169 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.169 EAL: request: mp_malloc_sync
00:06:09.169 EAL: No shared files mode enabled, IPC is disabled
00:06:09.169 EAL: Heap on socket 0 was expanded by 34MB
00:06:09.169 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.169 EAL: request: mp_malloc_sync
00:06:09.169 EAL: No shared files mode enabled, IPC is disabled
00:06:09.169 EAL: Heap on socket 0 was shrunk by 34MB
00:06:09.169 EAL: Trying to obtain current memory policy.
00:06:09.169 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:09.169 EAL: Restoring previous memory policy: 4
00:06:09.169 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.169 EAL: request: mp_malloc_sync
00:06:09.169 EAL: No shared files mode enabled, IPC is disabled
00:06:09.169 EAL: Heap on socket 0 was expanded by 66MB
00:06:09.169 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.428 EAL: request: mp_malloc_sync
00:06:09.428 EAL: No shared files mode enabled, IPC is disabled
00:06:09.428 EAL: Heap on socket 0 was shrunk by 66MB
00:06:09.428 EAL: Trying to obtain current memory policy.
00:06:09.428 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:09.428 EAL: Restoring previous memory policy: 4
00:06:09.428 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.428 EAL: request: mp_malloc_sync
00:06:09.428 EAL: No shared files mode enabled, IPC is disabled
00:06:09.429 EAL: Heap on socket 0 was expanded by 130MB
00:06:09.429 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.429 EAL: request: mp_malloc_sync
00:06:09.429 EAL: No shared files mode enabled, IPC is disabled
00:06:09.429 EAL: Heap on socket 0 was shrunk by 130MB
00:06:09.429 EAL: Trying to obtain current memory policy.
00:06:09.429 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:09.429 EAL: Restoring previous memory policy: 4
00:06:09.429 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.429 EAL: request: mp_malloc_sync
00:06:09.429 EAL: No shared files mode enabled, IPC is disabled
00:06:09.429 EAL: Heap on socket 0 was expanded by 258MB
00:06:09.429 EAL: Calling mem event callback 'spdk:(nil)'
00:06:09.429 EAL: request: mp_malloc_sync
00:06:09.429 EAL: No shared files mode enabled, IPC is disabled
00:06:09.429 EAL: Heap on socket 0 was shrunk by 258MB
00:06:09.429 EAL: Trying to obtain current memory policy.
00:06:09.429 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.688 EAL: Restoring previous memory policy: 4 00:06:09.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.688 EAL: request: mp_malloc_sync 00:06:09.688 EAL: No shared files mode enabled, IPC is disabled 00:06:09.688 EAL: Heap on socket 0 was expanded by 514MB 00:06:09.688 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.688 EAL: request: mp_malloc_sync 00:06:09.688 EAL: No shared files mode enabled, IPC is disabled 00:06:09.688 EAL: Heap on socket 0 was shrunk by 514MB 00:06:09.688 EAL: Trying to obtain current memory policy. 00:06:09.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.948 EAL: Restoring previous memory policy: 4 00:06:09.948 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.948 EAL: request: mp_malloc_sync 00:06:09.948 EAL: No shared files mode enabled, IPC is disabled 00:06:09.948 EAL: Heap on socket 0 was expanded by 1026MB 00:06:10.207 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.207 passed 00:06:10.207 00:06:10.207 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.207 suites 1 1 n/a 0 0 00:06:10.207 tests 2 2 2 0 0 00:06:10.207 asserts 5470 5470 5470 0 n/a 00:06:10.207 00:06:10.207 Elapsed time = 1.316 seconds 00:06:10.207 EAL: request: mp_malloc_sync 00:06:10.207 EAL: No shared files mode enabled, IPC is disabled 00:06:10.207 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:10.207 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.207 EAL: request: mp_malloc_sync 00:06:10.207 EAL: No shared files mode enabled, IPC is disabled 00:06:10.207 EAL: Heap on socket 0 was shrunk by 2MB 00:06:10.207 EAL: No shared files mode enabled, IPC is disabled 00:06:10.207 EAL: No shared files mode enabled, IPC is disabled 00:06:10.207 EAL: No shared files mode enabled, IPC is disabled 00:06:10.207 00:06:10.207 real 0m1.600s 00:06:10.207 user 0m0.770s 00:06:10.207 sys 0m0.692s 00:06:10.207 18:46:39 env.env_vtophys -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:06:10.207 18:46:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:10.207 ************************************ 00:06:10.207 END TEST env_vtophys 00:06:10.207 ************************************ 00:06:10.468 18:46:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:10.468 18:46:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.468 18:46:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.468 18:46:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.468 ************************************ 00:06:10.468 START TEST env_pci 00:06:10.468 ************************************ 00:06:10.468 18:46:39 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:10.468 00:06:10.468 00:06:10.468 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.468 http://cunit.sourceforge.net/ 00:06:10.468 00:06:10.468 00:06:10.468 Suite: pci 00:06:10.468 Test: pci_hook ...[2024-11-28 18:46:39.890944] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70554 has claimed it 00:06:10.468 passed 00:06:10.468 00:06:10.468 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.468 suites 1 1 n/a 0 0 00:06:10.468 tests 1 1 1 0 0 00:06:10.468 asserts 25 25 25 0 n/a 00:06:10.468 00:06:10.468 Elapsed time = 0.005 seconds 00:06:10.468 EAL: Cannot find device (10000:00:01.0) 00:06:10.468 EAL: Failed to attach device on primary process 00:06:10.468 00:06:10.468 real 0m0.109s 00:06:10.468 user 0m0.041s 00:06:10.468 sys 0m0.065s 00:06:10.468 18:46:39 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.468 18:46:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:10.468 ************************************ 00:06:10.468 END TEST env_pci 00:06:10.468 
************************************ 00:06:10.468 18:46:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:10.468 18:46:40 env -- env/env.sh@15 -- # uname 00:06:10.468 18:46:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:10.468 18:46:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:10.468 18:46:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:10.468 18:46:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:10.468 18:46:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.468 18:46:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.468 ************************************ 00:06:10.468 START TEST env_dpdk_post_init 00:06:10.468 ************************************ 00:06:10.468 18:46:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:10.727 EAL: Detected CPU lcores: 10 00:06:10.727 EAL: Detected NUMA nodes: 1 00:06:10.727 EAL: Detected shared linkage of DPDK 00:06:10.727 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:10.727 EAL: Selected IOVA mode 'PA' 00:06:10.728 Starting DPDK initialization... 00:06:10.728 Starting SPDK post initialization... 00:06:10.728 SPDK NVMe probe 00:06:10.728 Attaching to 0000:00:10.0 00:06:10.728 Attaching to 0000:00:11.0 00:06:10.728 Attached to 0000:00:10.0 00:06:10.728 Attached to 0000:00:11.0 00:06:10.728 Cleaning up... 
00:06:10.728 ************************************ 00:06:10.728 END TEST env_dpdk_post_init 00:06:10.728 ************************************ 00:06:10.728 00:06:10.728 real 0m0.270s 00:06:10.728 user 0m0.085s 00:06:10.728 sys 0m0.085s 00:06:10.728 18:46:40 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.728 18:46:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.987 18:46:40 env -- env/env.sh@26 -- # uname 00:06:10.987 18:46:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:10.987 18:46:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:10.987 18:46:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.987 18:46:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.987 18:46:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.987 ************************************ 00:06:10.987 START TEST env_mem_callbacks 00:06:10.987 ************************************ 00:06:10.987 18:46:40 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:10.987 EAL: Detected CPU lcores: 10 00:06:10.987 EAL: Detected NUMA nodes: 1 00:06:10.987 EAL: Detected shared linkage of DPDK 00:06:10.987 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:10.987 EAL: Selected IOVA mode 'PA' 00:06:10.987 00:06:10.987 00:06:10.987 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.987 http://cunit.sourceforge.net/ 00:06:10.987 00:06:10.987 00:06:10.987 Suite: memory 00:06:10.987 Test: test ... 
00:06:10.987 register 0x200000200000 2097152 00:06:10.987 malloc 3145728 00:06:10.987 register 0x200000400000 4194304 00:06:10.987 buf 0x200000500000 len 3145728 PASSED 00:06:10.987 malloc 64 00:06:10.987 buf 0x2000004fff40 len 64 PASSED 00:06:10.987 malloc 4194304 00:06:10.988 register 0x200000800000 6291456 00:06:10.988 buf 0x200000a00000 len 4194304 PASSED 00:06:10.988 free 0x200000500000 3145728 00:06:10.988 free 0x2000004fff40 64 00:06:10.988 unregister 0x200000400000 4194304 PASSED 00:06:10.988 free 0x200000a00000 4194304 00:06:10.988 unregister 0x200000800000 6291456 PASSED 00:06:10.988 malloc 8388608 00:06:10.988 register 0x200000400000 10485760 00:06:10.988 buf 0x200000600000 len 8388608 PASSED 00:06:10.988 free 0x200000600000 8388608 00:06:10.988 unregister 0x200000400000 10485760 PASSED 00:06:10.988 passed 00:06:10.988 00:06:10.988 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.988 suites 1 1 n/a 0 0 00:06:10.988 tests 1 1 1 0 0 00:06:10.988 asserts 15 15 15 0 n/a 00:06:10.988 00:06:10.988 Elapsed time = 0.011 seconds 00:06:10.988 00:06:10.988 real 0m0.212s 00:06:10.988 user 0m0.040s 00:06:10.988 sys 0m0.069s 00:06:10.988 18:46:40 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.988 18:46:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:10.988 ************************************ 00:06:10.988 END TEST env_mem_callbacks 00:06:10.988 ************************************ 00:06:11.246 00:06:11.246 real 0m3.036s 00:06:11.246 user 0m1.395s 00:06:11.246 sys 0m1.305s 00:06:11.246 18:46:40 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.246 18:46:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.246 ************************************ 00:06:11.246 END TEST env 00:06:11.246 ************************************ 00:06:11.246 18:46:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:11.246 18:46:40 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.246 18:46:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.246 18:46:40 -- common/autotest_common.sh@10 -- # set +x 00:06:11.246 ************************************ 00:06:11.247 START TEST rpc 00:06:11.247 ************************************ 00:06:11.247 18:46:40 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:11.247 * Looking for test storage... 00:06:11.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:11.247 18:46:40 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.247 18:46:40 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.247 18:46:40 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.505 18:46:40 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.505 18:46:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.505 18:46:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.505 18:46:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.505 18:46:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.505 18:46:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.505 18:46:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.505 18:46:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.505 18:46:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.505 18:46:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.505 18:46:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.505 18:46:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.505 18:46:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:11.505 18:46:40 rpc -- scripts/common.sh@345 -- # : 1 00:06:11.505 18:46:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.505 18:46:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.505 18:46:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:11.505 18:46:40 rpc -- scripts/common.sh@353 -- # local d=1 00:06:11.506 18:46:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.506 18:46:40 rpc -- scripts/common.sh@355 -- # echo 1 00:06:11.506 18:46:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.506 18:46:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:11.506 18:46:40 rpc -- scripts/common.sh@353 -- # local d=2 00:06:11.506 18:46:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.506 18:46:40 rpc -- scripts/common.sh@355 -- # echo 2 00:06:11.506 18:46:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.506 18:46:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.506 18:46:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.506 18:46:40 rpc -- scripts/common.sh@368 -- # return 0 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.506 --rc genhtml_branch_coverage=1 00:06:11.506 --rc genhtml_function_coverage=1 00:06:11.506 --rc genhtml_legend=1 00:06:11.506 --rc geninfo_all_blocks=1 00:06:11.506 --rc geninfo_unexecuted_blocks=1 00:06:11.506 00:06:11.506 ' 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.506 --rc genhtml_branch_coverage=1 00:06:11.506 --rc genhtml_function_coverage=1 00:06:11.506 --rc genhtml_legend=1 00:06:11.506 --rc geninfo_all_blocks=1 00:06:11.506 --rc geninfo_unexecuted_blocks=1 00:06:11.506 00:06:11.506 ' 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:11.506 --rc genhtml_branch_coverage=1 00:06:11.506 --rc genhtml_function_coverage=1 00:06:11.506 --rc genhtml_legend=1 00:06:11.506 --rc geninfo_all_blocks=1 00:06:11.506 --rc geninfo_unexecuted_blocks=1 00:06:11.506 00:06:11.506 ' 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.506 --rc genhtml_branch_coverage=1 00:06:11.506 --rc genhtml_function_coverage=1 00:06:11.506 --rc genhtml_legend=1 00:06:11.506 --rc geninfo_all_blocks=1 00:06:11.506 --rc geninfo_unexecuted_blocks=1 00:06:11.506 00:06:11.506 ' 00:06:11.506 18:46:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70681 00:06:11.506 18:46:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:11.506 18:46:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.506 18:46:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70681 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@835 -- # '[' -z 70681 ']' 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.506 18:46:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.506 [2024-11-28 18:46:41.019583] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:11.506 [2024-11-28 18:46:41.019737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70681 ] 00:06:11.765 [2024-11-28 18:46:41.156087] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.765 [2024-11-28 18:46:41.194736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.765 [2024-11-28 18:46:41.221110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:11.765 [2024-11-28 18:46:41.221171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70681' to capture a snapshot of events at runtime. 00:06:11.766 [2024-11-28 18:46:41.221182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:11.766 [2024-11-28 18:46:41.221192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:11.766 [2024-11-28 18:46:41.221201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70681 for offline analysis/debug. 
00:06:11.766 [2024-11-28 18:46:41.221609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.333 18:46:41 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.333 18:46:41 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.333 18:46:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:12.333 18:46:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:12.333 18:46:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:12.333 18:46:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:12.333 18:46:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.333 18:46:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.333 18:46:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.333 ************************************ 00:06:12.333 START TEST rpc_integrity 00:06:12.333 ************************************ 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:12.333 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.333 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:12.333 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:12.333 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:12.333 18:46:41 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.333 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:12.333 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.333 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.333 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:12.333 { 00:06:12.333 "name": "Malloc0", 00:06:12.333 "aliases": [ 00:06:12.333 "41e4df22-12aa-4842-bf11-df85a43a5b79" 00:06:12.333 ], 00:06:12.333 "product_name": "Malloc disk", 00:06:12.333 "block_size": 512, 00:06:12.333 "num_blocks": 16384, 00:06:12.333 "uuid": "41e4df22-12aa-4842-bf11-df85a43a5b79", 00:06:12.333 "assigned_rate_limits": { 00:06:12.333 "rw_ios_per_sec": 0, 00:06:12.333 "rw_mbytes_per_sec": 0, 00:06:12.333 "r_mbytes_per_sec": 0, 00:06:12.333 "w_mbytes_per_sec": 0 00:06:12.333 }, 00:06:12.333 "claimed": false, 00:06:12.333 "zoned": false, 00:06:12.333 "supported_io_types": { 00:06:12.333 "read": true, 00:06:12.333 "write": true, 00:06:12.333 "unmap": true, 00:06:12.333 "flush": true, 00:06:12.333 "reset": true, 00:06:12.333 "nvme_admin": false, 00:06:12.333 "nvme_io": false, 00:06:12.333 "nvme_io_md": false, 00:06:12.333 "write_zeroes": true, 00:06:12.333 "zcopy": true, 00:06:12.333 "get_zone_info": false, 00:06:12.333 "zone_management": false, 00:06:12.333 "zone_append": false, 00:06:12.333 "compare": false, 00:06:12.333 "compare_and_write": false, 00:06:12.333 "abort": true, 00:06:12.333 "seek_hole": false, 
00:06:12.333 "seek_data": false, 00:06:12.333 "copy": true, 00:06:12.333 "nvme_iov_md": false 00:06:12.333 }, 00:06:12.333 "memory_domains": [ 00:06:12.333 { 00:06:12.333 "dma_device_id": "system", 00:06:12.333 "dma_device_type": 1 00:06:12.333 }, 00:06:12.333 { 00:06:12.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.333 "dma_device_type": 2 00:06:12.333 } 00:06:12.333 ], 00:06:12.333 "driver_specific": {} 00:06:12.333 } 00:06:12.333 ]' 00:06:12.333 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:12.592 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:12.592 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:12.592 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.592 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.592 [2024-11-28 18:46:41.967497] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:12.592 [2024-11-28 18:46:41.967567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:12.592 [2024-11-28 18:46:41.967595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:12.592 [2024-11-28 18:46:41.967608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:12.592 [2024-11-28 18:46:41.969918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:12.592 [2024-11-28 18:46:41.969954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:12.592 Passthru0 00:06:12.592 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.592 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:12.592 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.592 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:12.592 18:46:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.592 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:12.592 { 00:06:12.592 "name": "Malloc0", 00:06:12.592 "aliases": [ 00:06:12.592 "41e4df22-12aa-4842-bf11-df85a43a5b79" 00:06:12.592 ], 00:06:12.592 "product_name": "Malloc disk", 00:06:12.592 "block_size": 512, 00:06:12.592 "num_blocks": 16384, 00:06:12.592 "uuid": "41e4df22-12aa-4842-bf11-df85a43a5b79", 00:06:12.592 "assigned_rate_limits": { 00:06:12.592 "rw_ios_per_sec": 0, 00:06:12.592 "rw_mbytes_per_sec": 0, 00:06:12.592 "r_mbytes_per_sec": 0, 00:06:12.593 "w_mbytes_per_sec": 0 00:06:12.593 }, 00:06:12.593 "claimed": true, 00:06:12.593 "claim_type": "exclusive_write", 00:06:12.593 "zoned": false, 00:06:12.593 "supported_io_types": { 00:06:12.593 "read": true, 00:06:12.593 "write": true, 00:06:12.593 "unmap": true, 00:06:12.593 "flush": true, 00:06:12.593 "reset": true, 00:06:12.593 "nvme_admin": false, 00:06:12.593 "nvme_io": false, 00:06:12.593 "nvme_io_md": false, 00:06:12.593 "write_zeroes": true, 00:06:12.593 "zcopy": true, 00:06:12.593 "get_zone_info": false, 00:06:12.593 "zone_management": false, 00:06:12.593 "zone_append": false, 00:06:12.593 "compare": false, 00:06:12.593 "compare_and_write": false, 00:06:12.593 "abort": true, 00:06:12.593 "seek_hole": false, 00:06:12.593 "seek_data": false, 00:06:12.593 "copy": true, 00:06:12.593 "nvme_iov_md": false 00:06:12.593 }, 00:06:12.593 "memory_domains": [ 00:06:12.593 { 00:06:12.593 "dma_device_id": "system", 00:06:12.593 "dma_device_type": 1 00:06:12.593 }, 00:06:12.593 { 00:06:12.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.593 "dma_device_type": 2 00:06:12.593 } 00:06:12.593 ], 00:06:12.593 "driver_specific": {} 00:06:12.593 }, 00:06:12.593 { 00:06:12.593 "name": "Passthru0", 00:06:12.593 "aliases": [ 00:06:12.593 "943b54e0-191c-50d9-8f7d-abb61b71646a" 00:06:12.593 ], 00:06:12.593 "product_name": "passthru", 00:06:12.593 
"block_size": 512, 00:06:12.593 "num_blocks": 16384, 00:06:12.593 "uuid": "943b54e0-191c-50d9-8f7d-abb61b71646a", 00:06:12.593 "assigned_rate_limits": { 00:06:12.593 "rw_ios_per_sec": 0, 00:06:12.593 "rw_mbytes_per_sec": 0, 00:06:12.593 "r_mbytes_per_sec": 0, 00:06:12.593 "w_mbytes_per_sec": 0 00:06:12.593 }, 00:06:12.593 "claimed": false, 00:06:12.593 "zoned": false, 00:06:12.593 "supported_io_types": { 00:06:12.593 "read": true, 00:06:12.593 "write": true, 00:06:12.593 "unmap": true, 00:06:12.593 "flush": true, 00:06:12.593 "reset": true, 00:06:12.593 "nvme_admin": false, 00:06:12.593 "nvme_io": false, 00:06:12.593 "nvme_io_md": false, 00:06:12.593 "write_zeroes": true, 00:06:12.593 "zcopy": true, 00:06:12.593 "get_zone_info": false, 00:06:12.593 "zone_management": false, 00:06:12.593 "zone_append": false, 00:06:12.593 "compare": false, 00:06:12.593 "compare_and_write": false, 00:06:12.593 "abort": true, 00:06:12.593 "seek_hole": false, 00:06:12.593 "seek_data": false, 00:06:12.593 "copy": true, 00:06:12.593 "nvme_iov_md": false 00:06:12.593 }, 00:06:12.593 "memory_domains": [ 00:06:12.593 { 00:06:12.593 "dma_device_id": "system", 00:06:12.593 "dma_device_type": 1 00:06:12.593 }, 00:06:12.593 { 00:06:12.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.593 "dma_device_type": 2 00:06:12.593 } 00:06:12.593 ], 00:06:12.593 "driver_specific": { 00:06:12.593 "passthru": { 00:06:12.593 "name": "Passthru0", 00:06:12.593 "base_bdev_name": "Malloc0" 00:06:12.593 } 00:06:12.593 } 00:06:12.593 } 00:06:12.593 ]' 00:06:12.593 18:46:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:12.593 18:46:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:12.593 18:46:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.593 18:46:42 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.593 18:46:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.593 18:46:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.593 18:46:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:12.593 18:46:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:12.593 18:46:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:12.593 00:06:12.593 real 0m0.280s 00:06:12.593 user 0m0.168s 00:06:12.593 sys 0m0.042s 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.593 18:46:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.593 ************************************ 00:06:12.593 END TEST rpc_integrity 00:06:12.593 ************************************ 00:06:12.593 18:46:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:12.593 18:46:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.593 18:46:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.593 18:46:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.593 ************************************ 00:06:12.593 START TEST rpc_plugins 00:06:12.593 ************************************ 00:06:12.593 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:12.593 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:12.593 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.593 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.593 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.593 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:12.593 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:12.593 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.593 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.852 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.852 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:12.852 { 00:06:12.852 "name": "Malloc1", 00:06:12.852 "aliases": [ 00:06:12.852 "f40d887f-58f6-420c-a04f-18fe71bc6af2" 00:06:12.852 ], 00:06:12.852 "product_name": "Malloc disk", 00:06:12.852 "block_size": 4096, 00:06:12.852 "num_blocks": 256, 00:06:12.852 "uuid": "f40d887f-58f6-420c-a04f-18fe71bc6af2", 00:06:12.852 "assigned_rate_limits": { 00:06:12.852 "rw_ios_per_sec": 0, 00:06:12.852 "rw_mbytes_per_sec": 0, 00:06:12.852 "r_mbytes_per_sec": 0, 00:06:12.852 "w_mbytes_per_sec": 0 00:06:12.852 }, 00:06:12.852 "claimed": false, 00:06:12.852 "zoned": false, 00:06:12.852 "supported_io_types": { 00:06:12.852 "read": true, 00:06:12.852 "write": true, 00:06:12.852 "unmap": true, 00:06:12.852 "flush": true, 00:06:12.852 "reset": true, 00:06:12.852 "nvme_admin": false, 00:06:12.852 "nvme_io": false, 00:06:12.852 "nvme_io_md": false, 00:06:12.852 "write_zeroes": true, 00:06:12.852 "zcopy": true, 00:06:12.852 "get_zone_info": false, 00:06:12.852 "zone_management": false, 00:06:12.852 "zone_append": false, 00:06:12.852 "compare": false, 00:06:12.852 "compare_and_write": false, 00:06:12.852 "abort": true, 00:06:12.852 "seek_hole": false, 00:06:12.852 "seek_data": false, 00:06:12.852 "copy": 
true, 00:06:12.852 "nvme_iov_md": false 00:06:12.852 }, 00:06:12.852 "memory_domains": [ 00:06:12.852 { 00:06:12.852 "dma_device_id": "system", 00:06:12.852 "dma_device_type": 1 00:06:12.852 }, 00:06:12.852 { 00:06:12.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.852 "dma_device_type": 2 00:06:12.852 } 00:06:12.852 ], 00:06:12.852 "driver_specific": {} 00:06:12.852 } 00:06:12.852 ]' 00:06:12.852 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:12.852 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:12.852 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:12.852 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.852 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.852 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.852 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:12.852 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.852 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.852 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.852 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:12.852 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:12.852 18:46:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:12.852 00:06:12.852 real 0m0.165s 00:06:12.852 user 0m0.097s 00:06:12.852 sys 0m0.026s 00:06:12.852 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.852 18:46:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.852 ************************************ 00:06:12.852 END TEST rpc_plugins 00:06:12.852 ************************************ 00:06:12.852 18:46:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:12.852 18:46:42 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.852 18:46:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.852 18:46:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.852 ************************************ 00:06:12.852 START TEST rpc_trace_cmd_test 00:06:12.852 ************************************ 00:06:12.852 18:46:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:12.852 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:12.852 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:12.852 18:46:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.853 18:46:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.853 18:46:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.853 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:12.853 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70681", 00:06:12.853 "tpoint_group_mask": "0x8", 00:06:12.853 "iscsi_conn": { 00:06:12.853 "mask": "0x2", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "scsi": { 00:06:12.853 "mask": "0x4", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "bdev": { 00:06:12.853 "mask": "0x8", 00:06:12.853 "tpoint_mask": "0xffffffffffffffff" 00:06:12.853 }, 00:06:12.853 "nvmf_rdma": { 00:06:12.853 "mask": "0x10", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "nvmf_tcp": { 00:06:12.853 "mask": "0x20", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "ftl": { 00:06:12.853 "mask": "0x40", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "blobfs": { 00:06:12.853 "mask": "0x80", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "dsa": { 00:06:12.853 "mask": "0x200", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "thread": { 00:06:12.853 "mask": "0x400", 00:06:12.853 
"tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "nvme_pcie": { 00:06:12.853 "mask": "0x800", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "iaa": { 00:06:12.853 "mask": "0x1000", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "nvme_tcp": { 00:06:12.853 "mask": "0x2000", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "bdev_nvme": { 00:06:12.853 "mask": "0x4000", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "sock": { 00:06:12.853 "mask": "0x8000", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "blob": { 00:06:12.853 "mask": "0x10000", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "bdev_raid": { 00:06:12.853 "mask": "0x20000", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 }, 00:06:12.853 "scheduler": { 00:06:12.853 "mask": "0x40000", 00:06:12.853 "tpoint_mask": "0x0" 00:06:12.853 } 00:06:12.853 }' 00:06:12.853 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:13.111 00:06:13.111 real 0m0.245s 00:06:13.111 user 0m0.205s 00:06:13.111 sys 0m0.033s 00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:13.111 18:46:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.111 ************************************ 00:06:13.111 END TEST rpc_trace_cmd_test 00:06:13.111 ************************************ 00:06:13.111 18:46:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:13.111 18:46:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:13.111 18:46:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:13.111 18:46:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.111 18:46:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.111 18:46:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.111 ************************************ 00:06:13.111 START TEST rpc_daemon_integrity 00:06:13.111 ************************************ 00:06:13.111 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:13.111 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:13.111 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.111 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.111 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.111 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:13.370 { 00:06:13.370 "name": "Malloc2", 00:06:13.370 "aliases": [ 00:06:13.370 "ad2b22ae-ec80-47a8-bb18-a242eabb969a" 00:06:13.370 ], 00:06:13.370 "product_name": "Malloc disk", 00:06:13.370 "block_size": 512, 00:06:13.370 "num_blocks": 16384, 00:06:13.370 "uuid": "ad2b22ae-ec80-47a8-bb18-a242eabb969a", 00:06:13.370 "assigned_rate_limits": { 00:06:13.370 "rw_ios_per_sec": 0, 00:06:13.370 "rw_mbytes_per_sec": 0, 00:06:13.370 "r_mbytes_per_sec": 0, 00:06:13.370 "w_mbytes_per_sec": 0 00:06:13.370 }, 00:06:13.370 "claimed": false, 00:06:13.370 "zoned": false, 00:06:13.370 "supported_io_types": { 00:06:13.370 "read": true, 00:06:13.370 "write": true, 00:06:13.370 "unmap": true, 00:06:13.370 "flush": true, 00:06:13.370 "reset": true, 00:06:13.370 "nvme_admin": false, 00:06:13.370 "nvme_io": false, 00:06:13.370 "nvme_io_md": false, 00:06:13.370 "write_zeroes": true, 00:06:13.370 "zcopy": true, 00:06:13.370 "get_zone_info": false, 00:06:13.370 "zone_management": false, 00:06:13.370 "zone_append": false, 00:06:13.370 "compare": false, 00:06:13.370 "compare_and_write": false, 00:06:13.370 "abort": true, 00:06:13.370 "seek_hole": false, 00:06:13.370 "seek_data": false, 00:06:13.370 "copy": true, 00:06:13.370 "nvme_iov_md": false 00:06:13.370 }, 00:06:13.370 "memory_domains": [ 00:06:13.370 { 00:06:13.370 "dma_device_id": "system", 00:06:13.370 "dma_device_type": 1 00:06:13.370 }, 00:06:13.370 { 00:06:13.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.370 "dma_device_type": 2 00:06:13.370 } 
00:06:13.370 ], 00:06:13.370 "driver_specific": {} 00:06:13.370 } 00:06:13.370 ]' 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.370 [2024-11-28 18:46:42.848119] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:13.370 [2024-11-28 18:46:42.848174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:13.370 [2024-11-28 18:46:42.848197] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:13.370 [2024-11-28 18:46:42.848208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:13.370 [2024-11-28 18:46:42.850415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:13.370 [2024-11-28 18:46:42.850464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:13.370 Passthru0 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.370 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:13.371 { 00:06:13.371 "name": "Malloc2", 00:06:13.371 "aliases": [ 00:06:13.371 "ad2b22ae-ec80-47a8-bb18-a242eabb969a" 
00:06:13.371 ], 00:06:13.371 "product_name": "Malloc disk", 00:06:13.371 "block_size": 512, 00:06:13.371 "num_blocks": 16384, 00:06:13.371 "uuid": "ad2b22ae-ec80-47a8-bb18-a242eabb969a", 00:06:13.371 "assigned_rate_limits": { 00:06:13.371 "rw_ios_per_sec": 0, 00:06:13.371 "rw_mbytes_per_sec": 0, 00:06:13.371 "r_mbytes_per_sec": 0, 00:06:13.371 "w_mbytes_per_sec": 0 00:06:13.371 }, 00:06:13.371 "claimed": true, 00:06:13.371 "claim_type": "exclusive_write", 00:06:13.371 "zoned": false, 00:06:13.371 "supported_io_types": { 00:06:13.371 "read": true, 00:06:13.371 "write": true, 00:06:13.371 "unmap": true, 00:06:13.371 "flush": true, 00:06:13.371 "reset": true, 00:06:13.371 "nvme_admin": false, 00:06:13.371 "nvme_io": false, 00:06:13.371 "nvme_io_md": false, 00:06:13.371 "write_zeroes": true, 00:06:13.371 "zcopy": true, 00:06:13.371 "get_zone_info": false, 00:06:13.371 "zone_management": false, 00:06:13.371 "zone_append": false, 00:06:13.371 "compare": false, 00:06:13.371 "compare_and_write": false, 00:06:13.371 "abort": true, 00:06:13.371 "seek_hole": false, 00:06:13.371 "seek_data": false, 00:06:13.371 "copy": true, 00:06:13.371 "nvme_iov_md": false 00:06:13.371 }, 00:06:13.371 "memory_domains": [ 00:06:13.371 { 00:06:13.371 "dma_device_id": "system", 00:06:13.371 "dma_device_type": 1 00:06:13.371 }, 00:06:13.371 { 00:06:13.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.371 "dma_device_type": 2 00:06:13.371 } 00:06:13.371 ], 00:06:13.371 "driver_specific": {} 00:06:13.371 }, 00:06:13.371 { 00:06:13.371 "name": "Passthru0", 00:06:13.371 "aliases": [ 00:06:13.371 "52d89c17-fa5e-54d7-8ca3-b90f6b3c6a22" 00:06:13.371 ], 00:06:13.371 "product_name": "passthru", 00:06:13.371 "block_size": 512, 00:06:13.371 "num_blocks": 16384, 00:06:13.371 "uuid": "52d89c17-fa5e-54d7-8ca3-b90f6b3c6a22", 00:06:13.371 "assigned_rate_limits": { 00:06:13.371 "rw_ios_per_sec": 0, 00:06:13.371 "rw_mbytes_per_sec": 0, 00:06:13.371 "r_mbytes_per_sec": 0, 00:06:13.371 "w_mbytes_per_sec": 0 
00:06:13.371 }, 00:06:13.371 "claimed": false, 00:06:13.371 "zoned": false, 00:06:13.371 "supported_io_types": { 00:06:13.371 "read": true, 00:06:13.371 "write": true, 00:06:13.371 "unmap": true, 00:06:13.371 "flush": true, 00:06:13.371 "reset": true, 00:06:13.371 "nvme_admin": false, 00:06:13.371 "nvme_io": false, 00:06:13.371 "nvme_io_md": false, 00:06:13.371 "write_zeroes": true, 00:06:13.371 "zcopy": true, 00:06:13.371 "get_zone_info": false, 00:06:13.371 "zone_management": false, 00:06:13.371 "zone_append": false, 00:06:13.371 "compare": false, 00:06:13.371 "compare_and_write": false, 00:06:13.371 "abort": true, 00:06:13.371 "seek_hole": false, 00:06:13.371 "seek_data": false, 00:06:13.371 "copy": true, 00:06:13.371 "nvme_iov_md": false 00:06:13.371 }, 00:06:13.371 "memory_domains": [ 00:06:13.371 { 00:06:13.371 "dma_device_id": "system", 00:06:13.371 "dma_device_type": 1 00:06:13.371 }, 00:06:13.371 { 00:06:13.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.371 "dma_device_type": 2 00:06:13.371 } 00:06:13.371 ], 00:06:13.371 "driver_specific": { 00:06:13.371 "passthru": { 00:06:13.371 "name": "Passthru0", 00:06:13.371 "base_bdev_name": "Malloc2" 00:06:13.371 } 00:06:13.371 } 00:06:13.371 } 00:06:13.371 ]' 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:13.371 18:46:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:13.630 18:46:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:13.630 00:06:13.630 real 0m0.318s 00:06:13.630 user 0m0.192s 00:06:13.630 sys 0m0.060s 00:06:13.630 18:46:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.630 18:46:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.630 ************************************ 00:06:13.630 END TEST rpc_daemon_integrity 00:06:13.630 ************************************ 00:06:13.630 18:46:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:13.630 18:46:43 rpc -- rpc/rpc.sh@84 -- # killprocess 70681 00:06:13.630 18:46:43 rpc -- common/autotest_common.sh@954 -- # '[' -z 70681 ']' 00:06:13.630 18:46:43 rpc -- common/autotest_common.sh@958 -- # kill -0 70681 00:06:13.630 18:46:43 rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.630 18:46:43 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.630 18:46:43 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70681 00:06:13.630 18:46:43 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.630 18:46:43 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.630 
killing process with pid 70681 00:06:13.630 18:46:43 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70681' 00:06:13.631 18:46:43 rpc -- common/autotest_common.sh@973 -- # kill 70681 00:06:13.631 18:46:43 rpc -- common/autotest_common.sh@978 -- # wait 70681 00:06:13.891 00:06:13.891 real 0m2.776s 00:06:13.891 user 0m3.302s 00:06:13.891 sys 0m0.870s 00:06:13.891 18:46:43 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.891 ************************************ 00:06:13.891 END TEST rpc 00:06:13.891 ************************************ 00:06:13.891 18:46:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.151 18:46:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:14.151 18:46:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.151 18:46:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.151 18:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:14.151 ************************************ 00:06:14.151 START TEST skip_rpc 00:06:14.151 ************************************ 00:06:14.151 18:46:43 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:14.151 * Looking for test storage... 
00:06:14.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:14.151 18:46:43 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.151 18:46:43 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.151 18:46:43 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.151 18:46:43 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.151 18:46:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:14.410 18:46:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:14.410 18:46:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.410 18:46:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:14.410 18:46:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.410 18:46:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.410 18:46:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.410 18:46:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:14.410 18:46:43 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.410 18:46:43 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.410 --rc genhtml_branch_coverage=1 00:06:14.410 --rc genhtml_function_coverage=1 00:06:14.410 --rc genhtml_legend=1 00:06:14.410 --rc geninfo_all_blocks=1 00:06:14.410 --rc geninfo_unexecuted_blocks=1 00:06:14.410 00:06:14.410 ' 00:06:14.410 18:46:43 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.410 --rc genhtml_branch_coverage=1 00:06:14.410 --rc genhtml_function_coverage=1 00:06:14.410 --rc genhtml_legend=1 00:06:14.410 --rc geninfo_all_blocks=1 00:06:14.410 --rc geninfo_unexecuted_blocks=1 00:06:14.410 00:06:14.410 ' 00:06:14.410 18:46:43 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:14.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.410 --rc genhtml_branch_coverage=1 00:06:14.410 --rc genhtml_function_coverage=1 00:06:14.410 --rc genhtml_legend=1 00:06:14.410 --rc geninfo_all_blocks=1 00:06:14.410 --rc geninfo_unexecuted_blocks=1 00:06:14.410 00:06:14.410 ' 00:06:14.410 18:46:43 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.410 --rc genhtml_branch_coverage=1 00:06:14.410 --rc genhtml_function_coverage=1 00:06:14.410 --rc genhtml_legend=1 00:06:14.410 --rc geninfo_all_blocks=1 00:06:14.410 --rc geninfo_unexecuted_blocks=1 00:06:14.410 00:06:14.410 ' 00:06:14.410 18:46:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.410 18:46:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:14.410 18:46:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:14.410 18:46:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.410 18:46:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.410 18:46:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.411 ************************************ 00:06:14.411 START TEST skip_rpc 00:06:14.411 ************************************ 00:06:14.411 18:46:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:14.411 18:46:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70883 00:06:14.411 18:46:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.411 18:46:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:14.411 18:46:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:14.411 [2024-11-28 18:46:43.873807] Starting SPDK v25.01-pre 
git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:14.411 [2024-11-28 18:46:43.873936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70883 ] 00:06:14.411 [2024-11-28 18:46:44.008040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:14.670 [2024-11-28 18:46:44.047929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.670 [2024-11-28 18:46:44.073606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@663 
-- # (( es > 128 )) 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70883 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 70883 ']' 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 70883 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70883 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.942 killing process with pid 70883 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70883' 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 70883 00:06:19.942 18:46:48 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 70883 00:06:19.942 00:06:19.942 real 0m5.422s 00:06:19.942 user 0m5.027s 00:06:19.942 sys 0m0.322s 00:06:19.942 18:46:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.942 18:46:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.942 ************************************ 00:06:19.942 END TEST skip_rpc 00:06:19.942 ************************************ 00:06:19.942 18:46:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:19.942 18:46:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:19.942 18:46:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.942 18:46:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.942 ************************************ 00:06:19.942 START TEST skip_rpc_with_json 00:06:19.942 ************************************ 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70970 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70970 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 70970 ']' 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.942 18:46:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.942 [2024-11-28 18:46:49.365032] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:19.942 [2024-11-28 18:46:49.365156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70970 ] 00:06:19.942 [2024-11-28 18:46:49.499755] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.942 [2024-11-28 18:46:49.538473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.201 [2024-11-28 18:46:49.563769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.778 [2024-11-28 18:46:50.171256] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:20.778 request: 00:06:20.778 { 00:06:20.778 "trtype": "tcp", 00:06:20.778 "method": "nvmf_get_transports", 00:06:20.778 "req_id": 1 00:06:20.778 } 00:06:20.778 Got JSON-RPC error response 00:06:20.778 response: 00:06:20.778 { 00:06:20.778 "code": -19, 00:06:20.778 "message": "No such device" 00:06:20.778 } 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.778 18:46:50 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.778 [2024-11-28 18:46:50.183359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.778 18:46:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:20.778 { 00:06:20.778 "subsystems": [ 00:06:20.778 { 00:06:20.778 "subsystem": "fsdev", 00:06:20.778 "config": [ 00:06:20.778 { 00:06:20.778 "method": "fsdev_set_opts", 00:06:20.778 "params": { 00:06:20.778 "fsdev_io_pool_size": 65535, 00:06:20.778 "fsdev_io_cache_size": 256 00:06:20.778 } 00:06:20.778 } 00:06:20.778 ] 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "subsystem": "keyring", 00:06:20.778 "config": [] 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "subsystem": "iobuf", 00:06:20.778 "config": [ 00:06:20.778 { 00:06:20.778 "method": "iobuf_set_options", 00:06:20.778 "params": { 00:06:20.778 "small_pool_count": 8192, 00:06:20.778 "large_pool_count": 1024, 00:06:20.778 "small_bufsize": 8192, 00:06:20.778 "large_bufsize": 135168, 00:06:20.778 "enable_numa": false 00:06:20.778 } 00:06:20.778 } 00:06:20.778 ] 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "subsystem": "sock", 00:06:20.778 "config": [ 00:06:20.778 { 00:06:20.778 "method": "sock_set_default_impl", 00:06:20.778 "params": { 00:06:20.778 "impl_name": "posix" 00:06:20.778 } 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "method": "sock_impl_set_options", 00:06:20.778 "params": { 00:06:20.778 "impl_name": "ssl", 
00:06:20.778 "recv_buf_size": 4096, 00:06:20.778 "send_buf_size": 4096, 00:06:20.778 "enable_recv_pipe": true, 00:06:20.778 "enable_quickack": false, 00:06:20.778 "enable_placement_id": 0, 00:06:20.778 "enable_zerocopy_send_server": true, 00:06:20.778 "enable_zerocopy_send_client": false, 00:06:20.778 "zerocopy_threshold": 0, 00:06:20.778 "tls_version": 0, 00:06:20.778 "enable_ktls": false 00:06:20.778 } 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "method": "sock_impl_set_options", 00:06:20.778 "params": { 00:06:20.778 "impl_name": "posix", 00:06:20.778 "recv_buf_size": 2097152, 00:06:20.778 "send_buf_size": 2097152, 00:06:20.778 "enable_recv_pipe": true, 00:06:20.778 "enable_quickack": false, 00:06:20.778 "enable_placement_id": 0, 00:06:20.778 "enable_zerocopy_send_server": true, 00:06:20.778 "enable_zerocopy_send_client": false, 00:06:20.778 "zerocopy_threshold": 0, 00:06:20.778 "tls_version": 0, 00:06:20.778 "enable_ktls": false 00:06:20.778 } 00:06:20.778 } 00:06:20.778 ] 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "subsystem": "vmd", 00:06:20.778 "config": [] 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "subsystem": "accel", 00:06:20.778 "config": [ 00:06:20.778 { 00:06:20.778 "method": "accel_set_options", 00:06:20.778 "params": { 00:06:20.778 "small_cache_size": 128, 00:06:20.778 "large_cache_size": 16, 00:06:20.778 "task_count": 2048, 00:06:20.778 "sequence_count": 2048, 00:06:20.778 "buf_count": 2048 00:06:20.778 } 00:06:20.778 } 00:06:20.778 ] 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "subsystem": "bdev", 00:06:20.778 "config": [ 00:06:20.778 { 00:06:20.778 "method": "bdev_set_options", 00:06:20.778 "params": { 00:06:20.778 "bdev_io_pool_size": 65535, 00:06:20.778 "bdev_io_cache_size": 256, 00:06:20.778 "bdev_auto_examine": true, 00:06:20.778 "iobuf_small_cache_size": 128, 00:06:20.778 "iobuf_large_cache_size": 16 00:06:20.778 } 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "method": "bdev_raid_set_options", 00:06:20.778 "params": { 00:06:20.778 
"process_window_size_kb": 1024, 00:06:20.778 "process_max_bandwidth_mb_sec": 0 00:06:20.778 } 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "method": "bdev_iscsi_set_options", 00:06:20.778 "params": { 00:06:20.778 "timeout_sec": 30 00:06:20.778 } 00:06:20.778 }, 00:06:20.778 { 00:06:20.778 "method": "bdev_nvme_set_options", 00:06:20.778 "params": { 00:06:20.778 "action_on_timeout": "none", 00:06:20.778 "timeout_us": 0, 00:06:20.778 "timeout_admin_us": 0, 00:06:20.778 "keep_alive_timeout_ms": 10000, 00:06:20.778 "arbitration_burst": 0, 00:06:20.778 "low_priority_weight": 0, 00:06:20.778 "medium_priority_weight": 0, 00:06:20.778 "high_priority_weight": 0, 00:06:20.778 "nvme_adminq_poll_period_us": 10000, 00:06:20.778 "nvme_ioq_poll_period_us": 0, 00:06:20.778 "io_queue_requests": 0, 00:06:20.778 "delay_cmd_submit": true, 00:06:20.778 "transport_retry_count": 4, 00:06:20.778 "bdev_retry_count": 3, 00:06:20.778 "transport_ack_timeout": 0, 00:06:20.778 "ctrlr_loss_timeout_sec": 0, 00:06:20.778 "reconnect_delay_sec": 0, 00:06:20.778 "fast_io_fail_timeout_sec": 0, 00:06:20.778 "disable_auto_failback": false, 00:06:20.778 "generate_uuids": false, 00:06:20.778 "transport_tos": 0, 00:06:20.778 "nvme_error_stat": false, 00:06:20.778 "rdma_srq_size": 0, 00:06:20.778 "io_path_stat": false, 00:06:20.778 "allow_accel_sequence": false, 00:06:20.778 "rdma_max_cq_size": 0, 00:06:20.778 "rdma_cm_event_timeout_ms": 0, 00:06:20.778 "dhchap_digests": [ 00:06:20.778 "sha256", 00:06:20.778 "sha384", 00:06:20.778 "sha512" 00:06:20.778 ], 00:06:20.778 "dhchap_dhgroups": [ 00:06:20.778 "null", 00:06:20.778 "ffdhe2048", 00:06:20.778 "ffdhe3072", 00:06:20.778 "ffdhe4096", 00:06:20.779 "ffdhe6144", 00:06:20.779 "ffdhe8192" 00:06:20.779 ] 00:06:20.779 } 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "method": "bdev_nvme_set_hotplug", 00:06:20.779 "params": { 00:06:20.779 "period_us": 100000, 00:06:20.779 "enable": false 00:06:20.779 } 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "method": 
"bdev_wait_for_examine" 00:06:20.779 } 00:06:20.779 ] 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "subsystem": "scsi", 00:06:20.779 "config": null 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "subsystem": "scheduler", 00:06:20.779 "config": [ 00:06:20.779 { 00:06:20.779 "method": "framework_set_scheduler", 00:06:20.779 "params": { 00:06:20.779 "name": "static" 00:06:20.779 } 00:06:20.779 } 00:06:20.779 ] 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "subsystem": "vhost_scsi", 00:06:20.779 "config": [] 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "subsystem": "vhost_blk", 00:06:20.779 "config": [] 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "subsystem": "ublk", 00:06:20.779 "config": [] 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "subsystem": "nbd", 00:06:20.779 "config": [] 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "subsystem": "nvmf", 00:06:20.779 "config": [ 00:06:20.779 { 00:06:20.779 "method": "nvmf_set_config", 00:06:20.779 "params": { 00:06:20.779 "discovery_filter": "match_any", 00:06:20.779 "admin_cmd_passthru": { 00:06:20.779 "identify_ctrlr": false 00:06:20.779 }, 00:06:20.779 "dhchap_digests": [ 00:06:20.779 "sha256", 00:06:20.779 "sha384", 00:06:20.779 "sha512" 00:06:20.779 ], 00:06:20.779 "dhchap_dhgroups": [ 00:06:20.779 "null", 00:06:20.779 "ffdhe2048", 00:06:20.779 "ffdhe3072", 00:06:20.779 "ffdhe4096", 00:06:20.779 "ffdhe6144", 00:06:20.779 "ffdhe8192" 00:06:20.779 ] 00:06:20.779 } 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "method": "nvmf_set_max_subsystems", 00:06:20.779 "params": { 00:06:20.779 "max_subsystems": 1024 00:06:20.779 } 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "method": "nvmf_set_crdt", 00:06:20.779 "params": { 00:06:20.779 "crdt1": 0, 00:06:20.779 "crdt2": 0, 00:06:20.779 "crdt3": 0 00:06:20.779 } 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "method": "nvmf_create_transport", 00:06:20.779 "params": { 00:06:20.779 "trtype": "TCP", 00:06:20.779 "max_queue_depth": 128, 00:06:20.779 "max_io_qpairs_per_ctrlr": 127, 00:06:20.779 
"in_capsule_data_size": 4096, 00:06:20.779 "max_io_size": 131072, 00:06:20.779 "io_unit_size": 131072, 00:06:20.779 "max_aq_depth": 128, 00:06:20.779 "num_shared_buffers": 511, 00:06:20.779 "buf_cache_size": 4294967295, 00:06:20.779 "dif_insert_or_strip": false, 00:06:20.779 "zcopy": false, 00:06:20.779 "c2h_success": true, 00:06:20.779 "sock_priority": 0, 00:06:20.779 "abort_timeout_sec": 1, 00:06:20.779 "ack_timeout": 0, 00:06:20.779 "data_wr_pool_size": 0 00:06:20.779 } 00:06:20.779 } 00:06:20.779 ] 00:06:20.779 }, 00:06:20.779 { 00:06:20.779 "subsystem": "iscsi", 00:06:20.779 "config": [ 00:06:20.779 { 00:06:20.779 "method": "iscsi_set_options", 00:06:20.779 "params": { 00:06:20.779 "node_base": "iqn.2016-06.io.spdk", 00:06:20.779 "max_sessions": 128, 00:06:20.779 "max_connections_per_session": 2, 00:06:20.779 "max_queue_depth": 64, 00:06:20.779 "default_time2wait": 2, 00:06:20.779 "default_time2retain": 20, 00:06:20.779 "first_burst_length": 8192, 00:06:20.779 "immediate_data": true, 00:06:20.779 "allow_duplicated_isid": false, 00:06:20.779 "error_recovery_level": 0, 00:06:20.779 "nop_timeout": 60, 00:06:20.779 "nop_in_interval": 30, 00:06:20.779 "disable_chap": false, 00:06:20.779 "require_chap": false, 00:06:20.779 "mutual_chap": false, 00:06:20.779 "chap_group": 0, 00:06:20.779 "max_large_datain_per_connection": 64, 00:06:20.779 "max_r2t_per_connection": 4, 00:06:20.779 "pdu_pool_size": 36864, 00:06:20.779 "immediate_data_pool_size": 16384, 00:06:20.779 "data_out_pool_size": 2048 00:06:20.779 } 00:06:20.779 } 00:06:20.779 ] 00:06:20.779 } 00:06:20.779 ] 00:06:20.779 } 00:06:20.779 18:46:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:20.779 18:46:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70970 00:06:20.779 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 70970 ']' 00:06:20.779 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill 
-0 70970 00:06:20.779 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:20.779 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.779 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70970 00:06:21.038 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.038 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.038 killing process with pid 70970 00:06:21.038 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70970' 00:06:21.038 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 70970 00:06:21.038 18:46:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 70970 00:06:21.297 18:46:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70999 00:06:21.298 18:46:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:21.298 18:46:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:26.571 18:46:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70999 00:06:26.571 18:46:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 70999 ']' 00:06:26.571 18:46:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 70999 00:06:26.571 18:46:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:26.571 18:46:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.571 18:46:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70999 00:06:26.571 18:46:55 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.572 18:46:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.572 killing process with pid 70999 00:06:26.572 18:46:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70999' 00:06:26.572 18:46:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 70999 00:06:26.572 18:46:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 70999 00:06:26.572 18:46:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:26.572 18:46:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:26.832 00:06:26.832 real 0m6.911s 00:06:26.832 user 0m6.445s 00:06:26.832 sys 0m0.740s 00:06:26.832 18:46:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.832 18:46:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.832 ************************************ 00:06:26.832 END TEST skip_rpc_with_json 00:06:26.832 ************************************ 00:06:26.832 18:46:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:26.832 18:46:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.832 18:46:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.832 18:46:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.832 ************************************ 00:06:26.832 START TEST skip_rpc_with_delay 00:06:26.832 ************************************ 00:06:26.832 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:26.832 18:46:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:06:26.832 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:26.833 [2024-11-28 18:46:56.348930] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.833 00:06:26.833 real 0m0.168s 00:06:26.833 user 0m0.084s 00:06:26.833 sys 0m0.083s 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.833 18:46:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:26.833 ************************************ 00:06:26.833 END TEST skip_rpc_with_delay 00:06:26.833 ************************************ 00:06:27.093 18:46:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:27.093 18:46:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:27.093 18:46:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:27.093 18:46:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.093 18:46:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.093 18:46:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.093 ************************************ 00:06:27.093 START TEST exit_on_failed_rpc_init 00:06:27.093 ************************************ 00:06:27.093 18:46:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:27.093 18:46:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71105 00:06:27.093 18:46:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.093 18:46:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71105 00:06:27.093 18:46:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71105 ']' 00:06:27.093 18:46:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.093 18:46:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.093 18:46:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.093 18:46:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.093 18:46:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:27.093 [2024-11-28 18:46:56.587571] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:27.093 [2024-11-28 18:46:56.587699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71105 ] 00:06:27.353 [2024-11-28 18:46:56.723115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:27.353 [2024-11-28 18:46:56.763510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.353 [2024-11-28 18:46:56.788341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:27.922 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:27.922 [2024-11-28 18:46:57.477704] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:27.922 [2024-11-28 18:46:57.477808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71123 ] 00:06:28.182 [2024-11-28 18:46:57.613533] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.182 [2024-11-28 18:46:57.654242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.182 [2024-11-28 18:46:57.681094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.182 [2024-11-28 18:46:57.681193] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:28.182 [2024-11-28 18:46:57.681205] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:28.182 [2024-11-28 18:46:57.681221] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71105 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71105 ']' 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71105 00:06:28.182 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:28.442 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.442 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71105 00:06:28.442 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.442 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.442 killing process with pid 71105 00:06:28.442 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 71105' 00:06:28.442 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71105 00:06:28.442 18:46:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71105 00:06:28.701 00:06:28.701 real 0m1.692s 00:06:28.701 user 0m1.791s 00:06:28.701 sys 0m0.495s 00:06:28.701 18:46:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.701 18:46:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:28.701 ************************************ 00:06:28.701 END TEST exit_on_failed_rpc_init 00:06:28.701 ************************************ 00:06:28.701 18:46:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:28.701 00:06:28.701 real 0m14.703s 00:06:28.701 user 0m13.554s 00:06:28.701 sys 0m1.951s 00:06:28.701 18:46:58 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.701 18:46:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.701 ************************************ 00:06:28.701 END TEST skip_rpc 00:06:28.701 ************************************ 00:06:28.701 18:46:58 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:28.701 18:46:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.701 18:46:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.701 18:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.961 ************************************ 00:06:28.961 START TEST rpc_client 00:06:28.961 ************************************ 00:06:28.961 18:46:58 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:28.961 * Looking for test storage... 
00:06:28.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:28.961 18:46:58 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.961 18:46:58 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.961 18:46:58 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.961 18:46:58 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.961 18:46:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:28.961 18:46:58 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.961 18:46:58 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.961 --rc genhtml_branch_coverage=1 00:06:28.961 --rc genhtml_function_coverage=1 00:06:28.961 --rc genhtml_legend=1 00:06:28.961 --rc geninfo_all_blocks=1 00:06:28.961 --rc geninfo_unexecuted_blocks=1 00:06:28.961 00:06:28.961 ' 00:06:28.961 18:46:58 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.961 --rc genhtml_branch_coverage=1 00:06:28.961 --rc genhtml_function_coverage=1 00:06:28.961 --rc genhtml_legend=1 00:06:28.961 --rc geninfo_all_blocks=1 00:06:28.961 --rc geninfo_unexecuted_blocks=1 00:06:28.961 00:06:28.961 ' 00:06:28.961 18:46:58 rpc_client -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.961 --rc genhtml_branch_coverage=1 00:06:28.961 --rc genhtml_function_coverage=1 00:06:28.961 --rc genhtml_legend=1 00:06:28.961 --rc geninfo_all_blocks=1 00:06:28.961 --rc geninfo_unexecuted_blocks=1 00:06:28.961 00:06:28.961 ' 00:06:28.961 18:46:58 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.961 --rc genhtml_branch_coverage=1 00:06:28.961 --rc genhtml_function_coverage=1 00:06:28.962 --rc genhtml_legend=1 00:06:28.962 --rc geninfo_all_blocks=1 00:06:28.962 --rc geninfo_unexecuted_blocks=1 00:06:28.962 00:06:28.962 ' 00:06:28.962 18:46:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:29.221 OK 00:06:29.221 18:46:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:29.221 00:06:29.221 real 0m0.305s 00:06:29.221 user 0m0.181s 00:06:29.221 sys 0m0.143s 00:06:29.221 18:46:58 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.221 18:46:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:29.221 ************************************ 00:06:29.221 END TEST rpc_client 00:06:29.221 ************************************ 00:06:29.221 18:46:58 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:29.221 18:46:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.221 18:46:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.221 18:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:29.221 ************************************ 00:06:29.221 START TEST json_config 00:06:29.221 ************************************ 00:06:29.221 18:46:58 json_config -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:29.221 18:46:58 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.221 18:46:58 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.221 18:46:58 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.482 18:46:58 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.482 18:46:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.482 18:46:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.482 18:46:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.482 18:46:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.482 18:46:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.482 18:46:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.482 18:46:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.482 18:46:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.482 18:46:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.482 18:46:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.482 18:46:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.482 18:46:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:29.482 18:46:58 json_config -- scripts/common.sh@345 -- # : 1 00:06:29.482 18:46:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.482 18:46:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.482 18:46:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:29.482 18:46:58 json_config -- scripts/common.sh@353 -- # local d=1 00:06:29.482 18:46:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.482 18:46:58 json_config -- scripts/common.sh@355 -- # echo 1 00:06:29.482 18:46:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.482 18:46:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:29.482 18:46:58 json_config -- scripts/common.sh@353 -- # local d=2 00:06:29.482 18:46:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.482 18:46:58 json_config -- scripts/common.sh@355 -- # echo 2 00:06:29.482 18:46:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.482 18:46:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.482 18:46:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.482 18:46:58 json_config -- scripts/common.sh@368 -- # return 0 00:06:29.482 18:46:58 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.482 18:46:58 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.482 --rc genhtml_branch_coverage=1 00:06:29.482 --rc genhtml_function_coverage=1 00:06:29.482 --rc genhtml_legend=1 00:06:29.482 --rc geninfo_all_blocks=1 00:06:29.482 --rc geninfo_unexecuted_blocks=1 00:06:29.482 00:06:29.482 ' 00:06:29.482 18:46:58 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.482 --rc genhtml_branch_coverage=1 00:06:29.482 --rc genhtml_function_coverage=1 00:06:29.482 --rc genhtml_legend=1 00:06:29.482 --rc geninfo_all_blocks=1 00:06:29.482 --rc geninfo_unexecuted_blocks=1 00:06:29.482 00:06:29.482 ' 00:06:29.482 18:46:58 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.482 --rc genhtml_branch_coverage=1 00:06:29.482 --rc genhtml_function_coverage=1 00:06:29.482 --rc genhtml_legend=1 00:06:29.482 --rc geninfo_all_blocks=1 00:06:29.482 --rc geninfo_unexecuted_blocks=1 00:06:29.482 00:06:29.482 ' 00:06:29.482 18:46:58 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.482 --rc genhtml_branch_coverage=1 00:06:29.482 --rc genhtml_function_coverage=1 00:06:29.482 --rc genhtml_legend=1 00:06:29.482 --rc geninfo_all_blocks=1 00:06:29.482 --rc geninfo_unexecuted_blocks=1 00:06:29.482 00:06:29.482 ' 00:06:29.482 18:46:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc4a61d5-b373-4ac2-b454-18cb5da06a10 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=bc4a61d5-b373-4ac2-b454-18cb5da06a10 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:29.482 18:46:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.482 18:46:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.482 18:46:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.482 18:46:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.482 18:46:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.482 18:46:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.482 18:46:58 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.482 18:46:58 json_config -- paths/export.sh@5 -- # export PATH 00:06:29.482 18:46:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@51 -- # : 0 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.482 18:46:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.482 18:46:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:29.482 18:46:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:29.482 18:46:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:29.482 18:46:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:29.482 18:46:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:29.482 18:46:58 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:29.482 WARNING: No tests are enabled so not running JSON configuration tests 00:06:29.482 18:46:58 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:29.482 00:06:29.482 real 0m0.228s 00:06:29.482 user 0m0.139s 00:06:29.482 sys 0m0.092s 00:06:29.482 18:46:58 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.482 18:46:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.482 ************************************ 00:06:29.482 END TEST json_config 00:06:29.482 ************************************ 00:06:29.482 18:46:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:29.482 18:46:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.482 18:46:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.482 18:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:29.482 ************************************ 00:06:29.482 START TEST json_config_extra_key 00:06:29.482 ************************************ 00:06:29.482 18:46:58 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:29.482 18:46:59 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.482 18:46:59 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:29.483 18:46:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.742 18:46:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:29.742 18:46:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.742 18:46:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.742 --rc genhtml_branch_coverage=1 00:06:29.742 --rc genhtml_function_coverage=1 00:06:29.742 --rc genhtml_legend=1 00:06:29.742 --rc geninfo_all_blocks=1 00:06:29.742 --rc geninfo_unexecuted_blocks=1 00:06:29.742 00:06:29.742 ' 00:06:29.742 18:46:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.742 --rc genhtml_branch_coverage=1 00:06:29.742 --rc genhtml_function_coverage=1 00:06:29.742 --rc 
genhtml_legend=1 00:06:29.742 --rc geninfo_all_blocks=1 00:06:29.742 --rc geninfo_unexecuted_blocks=1 00:06:29.742 00:06:29.742 ' 00:06:29.742 18:46:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.742 --rc genhtml_branch_coverage=1 00:06:29.742 --rc genhtml_function_coverage=1 00:06:29.742 --rc genhtml_legend=1 00:06:29.742 --rc geninfo_all_blocks=1 00:06:29.742 --rc geninfo_unexecuted_blocks=1 00:06:29.742 00:06:29.742 ' 00:06:29.742 18:46:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.742 --rc genhtml_branch_coverage=1 00:06:29.742 --rc genhtml_function_coverage=1 00:06:29.742 --rc genhtml_legend=1 00:06:29.742 --rc geninfo_all_blocks=1 00:06:29.742 --rc geninfo_unexecuted_blocks=1 00:06:29.742 00:06:29.742 ' 00:06:29.742 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc4a61d5-b373-4ac2-b454-18cb5da06a10 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bc4a61d5-b373-4ac2-b454-18cb5da06a10 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.742 18:46:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.742 18:46:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.742 18:46:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.742 18:46:59 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.742 18:46:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.742 18:46:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:29.743 18:46:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.743 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.743 18:46:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:29.743 INFO: launching applications... 00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:06:29.743 18:46:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71311 00:06:29.743 Waiting for target to run... 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71311 /var/tmp/spdk_tgt.sock 00:06:29.743 18:46:59 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71311 ']' 00:06:29.743 18:46:59 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.743 18:46:59 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.743 18:46:59 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:29.743 18:46:59 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.743 18:46:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:29.743 18:46:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:29.743 [2024-11-28 18:46:59.290922] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:29.743 [2024-11-28 18:46:59.291060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71311 ] 00:06:30.311 [2024-11-28 18:46:59.631459] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.311 [2024-11-28 18:46:59.668094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.311 [2024-11-28 18:46:59.684576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.570 18:47:00 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.570 18:47:00 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:30.570 00:06:30.570 18:47:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:30.570 INFO: shutting down applications... 00:06:30.570 18:47:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:30.570 18:47:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:30.570 18:47:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:30.570 18:47:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:30.570 18:47:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71311 ]] 00:06:30.570 18:47:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71311 00:06:30.570 18:47:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:30.570 18:47:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.570 18:47:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71311 00:06:30.570 18:47:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:31.138 18:47:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:31.138 18:47:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:31.138 18:47:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71311 00:06:31.138 18:47:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:31.138 18:47:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:31.138 18:47:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:31.138 SPDK target shutdown done 00:06:31.138 18:47:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:31.138 Success 00:06:31.138 18:47:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:31.138 00:06:31.138 real 0m1.632s 00:06:31.138 user 0m1.327s 00:06:31.138 sys 0m0.472s 00:06:31.138 18:47:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.138 18:47:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:31.138 ************************************ 
00:06:31.138 END TEST json_config_extra_key 00:06:31.138 ************************************ 00:06:31.138 18:47:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:31.138 18:47:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.138 18:47:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.138 18:47:00 -- common/autotest_common.sh@10 -- # set +x 00:06:31.138 ************************************ 00:06:31.138 START TEST alias_rpc 00:06:31.138 ************************************ 00:06:31.138 18:47:00 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:31.398 * Looking for test storage... 00:06:31.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.398 18:47:00 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.398 18:47:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.398 --rc genhtml_branch_coverage=1 00:06:31.398 --rc genhtml_function_coverage=1 00:06:31.398 --rc genhtml_legend=1 00:06:31.398 --rc geninfo_all_blocks=1 00:06:31.398 --rc geninfo_unexecuted_blocks=1 00:06:31.398 00:06:31.398 ' 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.398 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.398 --rc genhtml_branch_coverage=1 00:06:31.398 --rc genhtml_function_coverage=1 00:06:31.398 --rc genhtml_legend=1 00:06:31.398 --rc geninfo_all_blocks=1 00:06:31.398 --rc geninfo_unexecuted_blocks=1 00:06:31.398 00:06:31.398 ' 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.398 --rc genhtml_branch_coverage=1 00:06:31.398 --rc genhtml_function_coverage=1 00:06:31.398 --rc genhtml_legend=1 00:06:31.398 --rc geninfo_all_blocks=1 00:06:31.398 --rc geninfo_unexecuted_blocks=1 00:06:31.398 00:06:31.398 ' 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.398 --rc genhtml_branch_coverage=1 00:06:31.398 --rc genhtml_function_coverage=1 00:06:31.398 --rc genhtml_legend=1 00:06:31.398 --rc geninfo_all_blocks=1 00:06:31.398 --rc geninfo_unexecuted_blocks=1 00:06:31.398 00:06:31.398 ' 00:06:31.398 18:47:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.398 18:47:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71390 00:06:31.398 18:47:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71390 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71390 ']' 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.398 18:47:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.398 18:47:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:31.398 [2024-11-28 18:47:00.976256] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:31.398 [2024-11-28 18:47:00.976433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71390 ] 00:06:31.657 [2024-11-28 18:47:01.111371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:31.657 [2024-11-28 18:47:01.151056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.657 [2024-11-28 18:47:01.176285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.226 18:47:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.226 18:47:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.226 18:47:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:32.485 18:47:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71390 00:06:32.485 18:47:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71390 ']' 00:06:32.485 18:47:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71390 00:06:32.485 18:47:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:32.485 18:47:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.485 18:47:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71390 00:06:32.485 18:47:02 alias_rpc -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:06:32.485 18:47:02 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.485 killing process with pid 71390 00:06:32.485 18:47:02 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71390' 00:06:32.485 18:47:02 alias_rpc -- common/autotest_common.sh@973 -- # kill 71390 00:06:32.485 18:47:02 alias_rpc -- common/autotest_common.sh@978 -- # wait 71390 00:06:33.126 00:06:33.126 real 0m1.713s 00:06:33.126 user 0m1.722s 00:06:33.126 sys 0m0.486s 00:06:33.126 18:47:02 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.126 18:47:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.126 ************************************ 00:06:33.126 END TEST alias_rpc 00:06:33.126 ************************************ 00:06:33.126 18:47:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:33.126 18:47:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:33.126 18:47:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.126 18:47:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.126 18:47:02 -- common/autotest_common.sh@10 -- # set +x 00:06:33.126 ************************************ 00:06:33.126 START TEST spdkcli_tcp 00:06:33.126 ************************************ 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:33.126 * Looking for test storage... 
00:06:33.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.126 18:47:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.126 --rc genhtml_branch_coverage=1 00:06:33.126 --rc genhtml_function_coverage=1 00:06:33.126 --rc genhtml_legend=1 00:06:33.126 --rc geninfo_all_blocks=1 00:06:33.126 --rc geninfo_unexecuted_blocks=1 00:06:33.126 00:06:33.126 ' 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.126 --rc genhtml_branch_coverage=1 00:06:33.126 --rc genhtml_function_coverage=1 00:06:33.126 --rc genhtml_legend=1 00:06:33.126 --rc geninfo_all_blocks=1 00:06:33.126 --rc geninfo_unexecuted_blocks=1 00:06:33.126 00:06:33.126 ' 00:06:33.126 18:47:02 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.126 --rc genhtml_branch_coverage=1 00:06:33.126 --rc genhtml_function_coverage=1 00:06:33.126 --rc genhtml_legend=1 00:06:33.126 --rc geninfo_all_blocks=1 00:06:33.126 --rc geninfo_unexecuted_blocks=1 00:06:33.126 00:06:33.126 ' 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.126 --rc genhtml_branch_coverage=1 00:06:33.126 --rc genhtml_function_coverage=1 00:06:33.126 --rc genhtml_legend=1 00:06:33.126 --rc geninfo_all_blocks=1 00:06:33.126 --rc geninfo_unexecuted_blocks=1 00:06:33.126 00:06:33.126 ' 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71464 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:33.126 18:47:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71464 00:06:33.126 18:47:02 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 71464 ']' 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.126 18:47:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.386 [2024-11-28 18:47:02.767983] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:33.386 [2024-11-28 18:47:02.768115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71464 ] 00:06:33.386 [2024-11-28 18:47:02.903319] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:33.386 [2024-11-28 18:47:02.941659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.386 [2024-11-28 18:47:02.968755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.386 [2024-11-28 18:47:02.968861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.324 18:47:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.324 18:47:03 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:34.324 18:47:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71481 00:06:34.324 18:47:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:34.324 18:47:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:34.324 [ 00:06:34.324 "bdev_malloc_delete", 00:06:34.324 "bdev_malloc_create", 00:06:34.324 "bdev_null_resize", 00:06:34.324 "bdev_null_delete", 00:06:34.324 "bdev_null_create", 00:06:34.324 "bdev_nvme_cuse_unregister", 00:06:34.324 "bdev_nvme_cuse_register", 00:06:34.324 "bdev_opal_new_user", 00:06:34.324 "bdev_opal_set_lock_state", 00:06:34.324 "bdev_opal_delete", 00:06:34.324 "bdev_opal_get_info", 00:06:34.324 "bdev_opal_create", 00:06:34.324 "bdev_nvme_opal_revert", 00:06:34.324 "bdev_nvme_opal_init", 00:06:34.324 "bdev_nvme_send_cmd", 00:06:34.324 "bdev_nvme_set_keys", 00:06:34.324 "bdev_nvme_get_path_iostat", 00:06:34.324 "bdev_nvme_get_mdns_discovery_info", 00:06:34.324 "bdev_nvme_stop_mdns_discovery", 00:06:34.324 "bdev_nvme_start_mdns_discovery", 00:06:34.324 "bdev_nvme_set_multipath_policy", 00:06:34.324 "bdev_nvme_set_preferred_path", 00:06:34.324 "bdev_nvme_get_io_paths", 00:06:34.324 "bdev_nvme_remove_error_injection", 00:06:34.324 "bdev_nvme_add_error_injection", 00:06:34.324 "bdev_nvme_get_discovery_info", 00:06:34.324 "bdev_nvme_stop_discovery", 00:06:34.324 "bdev_nvme_start_discovery", 00:06:34.324 
"bdev_nvme_get_controller_health_info", 00:06:34.325 "bdev_nvme_disable_controller", 00:06:34.325 "bdev_nvme_enable_controller", 00:06:34.325 "bdev_nvme_reset_controller", 00:06:34.325 "bdev_nvme_get_transport_statistics", 00:06:34.325 "bdev_nvme_apply_firmware", 00:06:34.325 "bdev_nvme_detach_controller", 00:06:34.325 "bdev_nvme_get_controllers", 00:06:34.325 "bdev_nvme_attach_controller", 00:06:34.325 "bdev_nvme_set_hotplug", 00:06:34.325 "bdev_nvme_set_options", 00:06:34.325 "bdev_passthru_delete", 00:06:34.325 "bdev_passthru_create", 00:06:34.325 "bdev_lvol_set_parent_bdev", 00:06:34.325 "bdev_lvol_set_parent", 00:06:34.325 "bdev_lvol_check_shallow_copy", 00:06:34.325 "bdev_lvol_start_shallow_copy", 00:06:34.325 "bdev_lvol_grow_lvstore", 00:06:34.325 "bdev_lvol_get_lvols", 00:06:34.325 "bdev_lvol_get_lvstores", 00:06:34.325 "bdev_lvol_delete", 00:06:34.325 "bdev_lvol_set_read_only", 00:06:34.325 "bdev_lvol_resize", 00:06:34.325 "bdev_lvol_decouple_parent", 00:06:34.325 "bdev_lvol_inflate", 00:06:34.325 "bdev_lvol_rename", 00:06:34.325 "bdev_lvol_clone_bdev", 00:06:34.325 "bdev_lvol_clone", 00:06:34.325 "bdev_lvol_snapshot", 00:06:34.325 "bdev_lvol_create", 00:06:34.325 "bdev_lvol_delete_lvstore", 00:06:34.325 "bdev_lvol_rename_lvstore", 00:06:34.325 "bdev_lvol_create_lvstore", 00:06:34.325 "bdev_raid_set_options", 00:06:34.325 "bdev_raid_remove_base_bdev", 00:06:34.325 "bdev_raid_add_base_bdev", 00:06:34.325 "bdev_raid_delete", 00:06:34.325 "bdev_raid_create", 00:06:34.325 "bdev_raid_get_bdevs", 00:06:34.325 "bdev_error_inject_error", 00:06:34.325 "bdev_error_delete", 00:06:34.325 "bdev_error_create", 00:06:34.325 "bdev_split_delete", 00:06:34.325 "bdev_split_create", 00:06:34.325 "bdev_delay_delete", 00:06:34.325 "bdev_delay_create", 00:06:34.325 "bdev_delay_update_latency", 00:06:34.325 "bdev_zone_block_delete", 00:06:34.325 "bdev_zone_block_create", 00:06:34.325 "blobfs_create", 00:06:34.325 "blobfs_detect", 00:06:34.325 "blobfs_set_cache_size", 00:06:34.325 
"bdev_aio_delete", 00:06:34.325 "bdev_aio_rescan", 00:06:34.325 "bdev_aio_create", 00:06:34.325 "bdev_ftl_set_property", 00:06:34.325 "bdev_ftl_get_properties", 00:06:34.325 "bdev_ftl_get_stats", 00:06:34.325 "bdev_ftl_unmap", 00:06:34.325 "bdev_ftl_unload", 00:06:34.325 "bdev_ftl_delete", 00:06:34.325 "bdev_ftl_load", 00:06:34.325 "bdev_ftl_create", 00:06:34.325 "bdev_virtio_attach_controller", 00:06:34.325 "bdev_virtio_scsi_get_devices", 00:06:34.325 "bdev_virtio_detach_controller", 00:06:34.325 "bdev_virtio_blk_set_hotplug", 00:06:34.325 "bdev_iscsi_delete", 00:06:34.325 "bdev_iscsi_create", 00:06:34.325 "bdev_iscsi_set_options", 00:06:34.325 "accel_error_inject_error", 00:06:34.325 "ioat_scan_accel_module", 00:06:34.325 "dsa_scan_accel_module", 00:06:34.325 "iaa_scan_accel_module", 00:06:34.325 "keyring_file_remove_key", 00:06:34.325 "keyring_file_add_key", 00:06:34.325 "keyring_linux_set_options", 00:06:34.325 "fsdev_aio_delete", 00:06:34.325 "fsdev_aio_create", 00:06:34.325 "iscsi_get_histogram", 00:06:34.325 "iscsi_enable_histogram", 00:06:34.325 "iscsi_set_options", 00:06:34.325 "iscsi_get_auth_groups", 00:06:34.325 "iscsi_auth_group_remove_secret", 00:06:34.325 "iscsi_auth_group_add_secret", 00:06:34.325 "iscsi_delete_auth_group", 00:06:34.325 "iscsi_create_auth_group", 00:06:34.325 "iscsi_set_discovery_auth", 00:06:34.325 "iscsi_get_options", 00:06:34.325 "iscsi_target_node_request_logout", 00:06:34.325 "iscsi_target_node_set_redirect", 00:06:34.325 "iscsi_target_node_set_auth", 00:06:34.325 "iscsi_target_node_add_lun", 00:06:34.325 "iscsi_get_stats", 00:06:34.325 "iscsi_get_connections", 00:06:34.325 "iscsi_portal_group_set_auth", 00:06:34.325 "iscsi_start_portal_group", 00:06:34.325 "iscsi_delete_portal_group", 00:06:34.325 "iscsi_create_portal_group", 00:06:34.325 "iscsi_get_portal_groups", 00:06:34.325 "iscsi_delete_target_node", 00:06:34.325 "iscsi_target_node_remove_pg_ig_maps", 00:06:34.325 "iscsi_target_node_add_pg_ig_maps", 00:06:34.325 
"iscsi_create_target_node", 00:06:34.325 "iscsi_get_target_nodes", 00:06:34.325 "iscsi_delete_initiator_group", 00:06:34.325 "iscsi_initiator_group_remove_initiators", 00:06:34.325 "iscsi_initiator_group_add_initiators", 00:06:34.325 "iscsi_create_initiator_group", 00:06:34.325 "iscsi_get_initiator_groups", 00:06:34.325 "nvmf_set_crdt", 00:06:34.325 "nvmf_set_config", 00:06:34.325 "nvmf_set_max_subsystems", 00:06:34.325 "nvmf_stop_mdns_prr", 00:06:34.325 "nvmf_publish_mdns_prr", 00:06:34.325 "nvmf_subsystem_get_listeners", 00:06:34.325 "nvmf_subsystem_get_qpairs", 00:06:34.325 "nvmf_subsystem_get_controllers", 00:06:34.325 "nvmf_get_stats", 00:06:34.325 "nvmf_get_transports", 00:06:34.325 "nvmf_create_transport", 00:06:34.325 "nvmf_get_targets", 00:06:34.325 "nvmf_delete_target", 00:06:34.325 "nvmf_create_target", 00:06:34.325 "nvmf_subsystem_allow_any_host", 00:06:34.325 "nvmf_subsystem_set_keys", 00:06:34.325 "nvmf_subsystem_remove_host", 00:06:34.325 "nvmf_subsystem_add_host", 00:06:34.325 "nvmf_ns_remove_host", 00:06:34.325 "nvmf_ns_add_host", 00:06:34.325 "nvmf_subsystem_remove_ns", 00:06:34.325 "nvmf_subsystem_set_ns_ana_group", 00:06:34.325 "nvmf_subsystem_add_ns", 00:06:34.325 "nvmf_subsystem_listener_set_ana_state", 00:06:34.325 "nvmf_discovery_get_referrals", 00:06:34.325 "nvmf_discovery_remove_referral", 00:06:34.325 "nvmf_discovery_add_referral", 00:06:34.325 "nvmf_subsystem_remove_listener", 00:06:34.325 "nvmf_subsystem_add_listener", 00:06:34.325 "nvmf_delete_subsystem", 00:06:34.325 "nvmf_create_subsystem", 00:06:34.325 "nvmf_get_subsystems", 00:06:34.325 "env_dpdk_get_mem_stats", 00:06:34.325 "nbd_get_disks", 00:06:34.325 "nbd_stop_disk", 00:06:34.325 "nbd_start_disk", 00:06:34.325 "ublk_recover_disk", 00:06:34.325 "ublk_get_disks", 00:06:34.325 "ublk_stop_disk", 00:06:34.325 "ublk_start_disk", 00:06:34.325 "ublk_destroy_target", 00:06:34.325 "ublk_create_target", 00:06:34.325 "virtio_blk_create_transport", 00:06:34.325 "virtio_blk_get_transports", 
00:06:34.325 "vhost_controller_set_coalescing", 00:06:34.325 "vhost_get_controllers", 00:06:34.325 "vhost_delete_controller", 00:06:34.325 "vhost_create_blk_controller", 00:06:34.325 "vhost_scsi_controller_remove_target", 00:06:34.325 "vhost_scsi_controller_add_target", 00:06:34.325 "vhost_start_scsi_controller", 00:06:34.325 "vhost_create_scsi_controller", 00:06:34.325 "thread_set_cpumask", 00:06:34.325 "scheduler_set_options", 00:06:34.325 "framework_get_governor", 00:06:34.325 "framework_get_scheduler", 00:06:34.325 "framework_set_scheduler", 00:06:34.325 "framework_get_reactors", 00:06:34.325 "thread_get_io_channels", 00:06:34.325 "thread_get_pollers", 00:06:34.325 "thread_get_stats", 00:06:34.325 "framework_monitor_context_switch", 00:06:34.325 "spdk_kill_instance", 00:06:34.325 "log_enable_timestamps", 00:06:34.325 "log_get_flags", 00:06:34.325 "log_clear_flag", 00:06:34.325 "log_set_flag", 00:06:34.325 "log_get_level", 00:06:34.325 "log_set_level", 00:06:34.325 "log_get_print_level", 00:06:34.325 "log_set_print_level", 00:06:34.325 "framework_enable_cpumask_locks", 00:06:34.325 "framework_disable_cpumask_locks", 00:06:34.325 "framework_wait_init", 00:06:34.325 "framework_start_init", 00:06:34.325 "scsi_get_devices", 00:06:34.325 "bdev_get_histogram", 00:06:34.325 "bdev_enable_histogram", 00:06:34.325 "bdev_set_qos_limit", 00:06:34.325 "bdev_set_qd_sampling_period", 00:06:34.325 "bdev_get_bdevs", 00:06:34.325 "bdev_reset_iostat", 00:06:34.325 "bdev_get_iostat", 00:06:34.325 "bdev_examine", 00:06:34.325 "bdev_wait_for_examine", 00:06:34.325 "bdev_set_options", 00:06:34.325 "accel_get_stats", 00:06:34.325 "accel_set_options", 00:06:34.325 "accel_set_driver", 00:06:34.325 "accel_crypto_key_destroy", 00:06:34.325 "accel_crypto_keys_get", 00:06:34.325 "accel_crypto_key_create", 00:06:34.325 "accel_assign_opc", 00:06:34.325 "accel_get_module_info", 00:06:34.325 "accel_get_opc_assignments", 00:06:34.325 "vmd_rescan", 00:06:34.325 "vmd_remove_device", 00:06:34.325 
"vmd_enable", 00:06:34.325 "sock_get_default_impl", 00:06:34.325 "sock_set_default_impl", 00:06:34.325 "sock_impl_set_options", 00:06:34.325 "sock_impl_get_options", 00:06:34.325 "iobuf_get_stats", 00:06:34.325 "iobuf_set_options", 00:06:34.325 "keyring_get_keys", 00:06:34.325 "framework_get_pci_devices", 00:06:34.325 "framework_get_config", 00:06:34.325 "framework_get_subsystems", 00:06:34.325 "fsdev_set_opts", 00:06:34.325 "fsdev_get_opts", 00:06:34.325 "trace_get_info", 00:06:34.325 "trace_get_tpoint_group_mask", 00:06:34.325 "trace_disable_tpoint_group", 00:06:34.325 "trace_enable_tpoint_group", 00:06:34.325 "trace_clear_tpoint_mask", 00:06:34.325 "trace_set_tpoint_mask", 00:06:34.325 "notify_get_notifications", 00:06:34.325 "notify_get_types", 00:06:34.325 "spdk_get_version", 00:06:34.325 "rpc_get_methods" 00:06:34.325 ] 00:06:34.325 18:47:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:34.325 18:47:03 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.325 18:47:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 18:47:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:34.325 18:47:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71464 00:06:34.325 18:47:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71464 ']' 00:06:34.325 18:47:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71464 00:06:34.325 18:47:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:34.325 18:47:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.326 18:47:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71464 00:06:34.326 18:47:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.326 18:47:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.326 killing process with pid 71464 00:06:34.326 18:47:03 spdkcli_tcp -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 71464' 00:06:34.326 18:47:03 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71464 00:06:34.326 18:47:03 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71464 00:06:34.894 00:06:34.894 real 0m1.783s 00:06:34.894 user 0m2.953s 00:06:34.894 sys 0m0.567s 00:06:34.894 18:47:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.894 18:47:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.894 ************************************ 00:06:34.894 END TEST spdkcli_tcp 00:06:34.894 ************************************ 00:06:34.894 18:47:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.894 18:47:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.894 18:47:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.894 18:47:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.894 ************************************ 00:06:34.894 START TEST dpdk_mem_utility 00:06:34.894 ************************************ 00:06:34.894 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.894 * Looking for test storage... 
00:06:34.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:34.894 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.894 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.894 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.894 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.894 18:47:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.154 18:47:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.154 --rc genhtml_branch_coverage=1 00:06:35.154 --rc genhtml_function_coverage=1 00:06:35.154 --rc genhtml_legend=1 00:06:35.154 --rc geninfo_all_blocks=1 00:06:35.154 --rc geninfo_unexecuted_blocks=1 00:06:35.154 00:06:35.154 ' 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.154 --rc genhtml_branch_coverage=1 00:06:35.154 --rc genhtml_function_coverage=1 00:06:35.154 --rc genhtml_legend=1 00:06:35.154 --rc geninfo_all_blocks=1 00:06:35.154 --rc 
geninfo_unexecuted_blocks=1 00:06:35.154 00:06:35.154 ' 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.154 --rc genhtml_branch_coverage=1 00:06:35.154 --rc genhtml_function_coverage=1 00:06:35.154 --rc genhtml_legend=1 00:06:35.154 --rc geninfo_all_blocks=1 00:06:35.154 --rc geninfo_unexecuted_blocks=1 00:06:35.154 00:06:35.154 ' 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.154 --rc genhtml_branch_coverage=1 00:06:35.154 --rc genhtml_function_coverage=1 00:06:35.154 --rc genhtml_legend=1 00:06:35.154 --rc geninfo_all_blocks=1 00:06:35.154 --rc geninfo_unexecuted_blocks=1 00:06:35.154 00:06:35.154 ' 00:06:35.154 18:47:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:35.154 18:47:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71564 00:06:35.154 18:47:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.154 18:47:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71564 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71564 ']' 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.154 18:47:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.154 [2024-11-28 18:47:04.614067] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:35.155 [2024-11-28 18:47:04.614194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71564 ] 00:06:35.155 [2024-11-28 18:47:04.748468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:35.414 [2024-11-28 18:47:04.787539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.414 [2024-11-28 18:47:04.811938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.983 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.983 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:35.983 18:47:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:35.983 18:47:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:35.983 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.983 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.983 { 00:06:35.983 "filename": "/tmp/spdk_mem_dump.txt" 00:06:35.983 } 00:06:35.983 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.984 18:47:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:35.984 DPDK memory size 818.000000 MiB in 1 heap(s) 
00:06:35.984 1 heaps totaling size 818.000000 MiB 00:06:35.984 size: 818.000000 MiB heap id: 0 00:06:35.984 end heaps---------- 00:06:35.984 9 mempools totaling size 603.782043 MiB 00:06:35.984 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:35.984 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:35.984 size: 100.555481 MiB name: bdev_io_71564 00:06:35.984 size: 50.003479 MiB name: msgpool_71564 00:06:35.984 size: 36.509338 MiB name: fsdev_io_71564 00:06:35.984 size: 21.763794 MiB name: PDU_Pool 00:06:35.984 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:35.984 size: 4.133484 MiB name: evtpool_71564 00:06:35.984 size: 0.026123 MiB name: Session_Pool 00:06:35.984 end mempools------- 00:06:35.984 6 memzones totaling size 4.142822 MiB 00:06:35.984 size: 1.000366 MiB name: RG_ring_0_71564 00:06:35.984 size: 1.000366 MiB name: RG_ring_1_71564 00:06:35.984 size: 1.000366 MiB name: RG_ring_4_71564 00:06:35.984 size: 1.000366 MiB name: RG_ring_5_71564 00:06:35.984 size: 0.125366 MiB name: RG_ring_2_71564 00:06:35.984 size: 0.015991 MiB name: RG_ring_3_71564 00:06:35.984 end memzones------- 00:06:35.984 18:47:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:35.984 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:06:35.984 list of free elements. 
size: 10.943787 MiB 00:06:35.984 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:35.984 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:35.984 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:35.984 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:35.984 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:35.984 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:35.984 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:35.984 element at address: 0x200000200000 with size: 0.858093 MiB 00:06:35.984 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:06:35.984 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:35.984 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:35.984 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:35.984 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:35.984 element at address: 0x200028200000 with size: 0.396301 MiB 00:06:35.984 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:35.984 list of standard malloc elements. 
size: 199.127319 MiB 00:06:35.984 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:35.984 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:35.984 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:35.984 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:35.984 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:35.984 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:35.984 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:35.984 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:35.984 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:35.984 element at 
address: 0x2000004ff400 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087f080 with size: 0.000183 MiB 
00:06:35.984 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:35.984 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d480 with 
size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:35.984 element at address: 
0x200000c7e980 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:35.984 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:35.985 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:35.985 
element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92200 with size: 0.000183 
MiB 00:06:35.985 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93700 
with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:35.985 element at 
address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200028265740 with size: 0.000183 MiB 00:06:35.985 element at address: 0x200028265800 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826c400 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:35.985 element at address: 0x20002826cf00 with size: 0.000183 MiB 
00:06:35.985 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e400 with 
size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e640 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:35.986 element at address: 
0x20002826f900 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:35.986 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:35.986 list of memzone associated elements. size: 607.928894 MiB 00:06:35.986 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:35.986 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:35.986 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:35.986 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:35.986 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:35.986 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71564_0 00:06:35.986 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:35.986 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71564_0 00:06:35.986 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:35.986 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71564_0 00:06:35.986 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:35.986 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:35.986 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:35.986 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:35.986 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:35.986 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71564_0 00:06:35.986 element at address: 0x2000009ffe00 
with size: 2.000488 MiB 00:06:35.986 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71564 00:06:35.986 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:06:35.986 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71564 00:06:35.986 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:35.986 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:35.986 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:35.986 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:35.986 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:35.986 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:35.986 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:35.986 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:35.986 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:35.986 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71564 00:06:35.986 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:35.986 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71564 00:06:35.986 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:35.986 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71564 00:06:35.986 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:35.986 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71564 00:06:35.986 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:35.986 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71564 00:06:35.986 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:35.986 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71564 00:06:35.986 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:35.986 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:35.986 element at address: 0x200003e7b780 with 
size: 0.500488 MiB 00:06:35.986 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:35.986 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:35.986 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:35.986 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:06:35.986 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71564 00:06:35.986 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:35.986 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71564 00:06:35.986 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:35.986 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:35.986 element at address: 0x2000282658c0 with size: 0.023743 MiB 00:06:35.986 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:35.986 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:35.986 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71564 00:06:35.986 element at address: 0x20002826ba00 with size: 0.002441 MiB 00:06:35.986 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:35.986 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:35.986 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71564 00:06:35.986 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:35.986 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71564 00:06:35.986 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:35.986 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71564 00:06:35.986 element at address: 0x20002826c4c0 with size: 0.000305 MiB 00:06:35.986 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:35.986 18:47:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:35.986 18:47:05 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71564 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71564 ']' 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71564 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71564 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.986 killing process with pid 71564 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71564' 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71564 00:06:35.986 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71564 00:06:36.556 00:06:36.556 real 0m1.634s 00:06:36.556 user 0m1.574s 00:06:36.556 sys 0m0.492s 00:06:36.556 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.556 18:47:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.556 ************************************ 00:06:36.556 END TEST dpdk_mem_utility 00:06:36.556 ************************************ 00:06:36.556 18:47:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:36.556 18:47:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.556 18:47:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.556 18:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:36.556 ************************************ 00:06:36.556 START TEST event 00:06:36.556 ************************************ 00:06:36.556 18:47:05 event -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:36.556 * Looking for test storage... 00:06:36.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:36.556 18:47:06 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.556 18:47:06 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.556 18:47:06 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.815 18:47:06 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.815 18:47:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.815 18:47:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.815 18:47:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.815 18:47:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.815 18:47:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.815 18:47:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.815 18:47:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.815 18:47:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.815 18:47:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.815 18:47:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.815 18:47:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.815 18:47:06 event -- scripts/common.sh@344 -- # case "$op" in 00:06:36.815 18:47:06 event -- scripts/common.sh@345 -- # : 1 00:06:36.815 18:47:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.815 18:47:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.815 18:47:06 event -- scripts/common.sh@365 -- # decimal 1 00:06:36.816 18:47:06 event -- scripts/common.sh@353 -- # local d=1 00:06:36.816 18:47:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.816 18:47:06 event -- scripts/common.sh@355 -- # echo 1 00:06:36.816 18:47:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.816 18:47:06 event -- scripts/common.sh@366 -- # decimal 2 00:06:36.816 18:47:06 event -- scripts/common.sh@353 -- # local d=2 00:06:36.816 18:47:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.816 18:47:06 event -- scripts/common.sh@355 -- # echo 2 00:06:36.816 18:47:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.816 18:47:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.816 18:47:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.816 18:47:06 event -- scripts/common.sh@368 -- # return 0 00:06:36.816 18:47:06 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.816 18:47:06 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.816 --rc genhtml_branch_coverage=1 00:06:36.816 --rc genhtml_function_coverage=1 00:06:36.816 --rc genhtml_legend=1 00:06:36.816 --rc geninfo_all_blocks=1 00:06:36.816 --rc geninfo_unexecuted_blocks=1 00:06:36.816 00:06:36.816 ' 00:06:36.816 18:47:06 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.816 --rc genhtml_branch_coverage=1 00:06:36.816 --rc genhtml_function_coverage=1 00:06:36.816 --rc genhtml_legend=1 00:06:36.816 --rc geninfo_all_blocks=1 00:06:36.816 --rc geninfo_unexecuted_blocks=1 00:06:36.816 00:06:36.816 ' 00:06:36.816 18:47:06 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.816 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:36.816 --rc genhtml_branch_coverage=1 00:06:36.816 --rc genhtml_function_coverage=1 00:06:36.816 --rc genhtml_legend=1 00:06:36.816 --rc geninfo_all_blocks=1 00:06:36.816 --rc geninfo_unexecuted_blocks=1 00:06:36.816 00:06:36.816 ' 00:06:36.816 18:47:06 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.816 --rc genhtml_branch_coverage=1 00:06:36.816 --rc genhtml_function_coverage=1 00:06:36.816 --rc genhtml_legend=1 00:06:36.816 --rc geninfo_all_blocks=1 00:06:36.816 --rc geninfo_unexecuted_blocks=1 00:06:36.816 00:06:36.816 ' 00:06:36.816 18:47:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:36.816 18:47:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:36.816 18:47:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.816 18:47:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:36.816 18:47:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.816 18:47:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.816 ************************************ 00:06:36.816 START TEST event_perf 00:06:36.816 ************************************ 00:06:36.816 18:47:06 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.816 Running I/O for 1 seconds...[2024-11-28 18:47:06.253592] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:36.816 [2024-11-28 18:47:06.253722] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71650 ] 00:06:36.816 [2024-11-28 18:47:06.386902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:37.075 [2024-11-28 18:47:06.422696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.075 [2024-11-28 18:47:06.451012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.075 [2024-11-28 18:47:06.451104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.075 [2024-11-28 18:47:06.451165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.075 Running I/O for 1 seconds...[2024-11-28 18:47:06.451242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.013 00:06:38.013 lcore 0: 204484 00:06:38.013 lcore 1: 204482 00:06:38.013 lcore 2: 204482 00:06:38.013 lcore 3: 204484 00:06:38.013 done. 
00:06:38.013 00:06:38.013 real 0m1.306s 00:06:38.013 user 0m4.082s 00:06:38.013 sys 0m0.111s 00:06:38.013 18:47:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.013 18:47:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.013 ************************************ 00:06:38.013 END TEST event_perf 00:06:38.013 ************************************ 00:06:38.013 18:47:07 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:38.013 18:47:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:38.013 18:47:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.013 18:47:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.013 ************************************ 00:06:38.013 START TEST event_reactor 00:06:38.013 ************************************ 00:06:38.013 18:47:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:38.274 [2024-11-28 18:47:07.644225] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:38.274 [2024-11-28 18:47:07.644376] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71684 ] 00:06:38.274 [2024-11-28 18:47:07.775914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:38.274 [2024-11-28 18:47:07.814053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.274 [2024-11-28 18:47:07.838436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.654 test_start 00:06:39.654 oneshot 00:06:39.654 tick 100 00:06:39.654 tick 100 00:06:39.654 tick 250 00:06:39.654 tick 100 00:06:39.654 tick 100 00:06:39.654 tick 100 00:06:39.654 tick 250 00:06:39.654 tick 500 00:06:39.654 tick 100 00:06:39.654 tick 100 00:06:39.654 tick 250 00:06:39.654 tick 100 00:06:39.654 tick 100 00:06:39.654 test_end 00:06:39.654 00:06:39.654 real 0m1.307s 00:06:39.654 user 0m1.118s 00:06:39.654 sys 0m0.082s 00:06:39.654 18:47:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.654 18:47:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:39.654 ************************************ 00:06:39.654 END TEST event_reactor 00:06:39.654 ************************************ 00:06:39.654 18:47:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.654 18:47:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:39.654 18:47:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.654 18:47:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.654 ************************************ 00:06:39.654 START TEST event_reactor_perf 00:06:39.654 ************************************ 00:06:39.654 18:47:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.654 [2024-11-28 18:47:09.021208] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:39.654 [2024-11-28 18:47:09.021331] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:06:39.654 [2024-11-28 18:47:09.153711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:39.655 [2024-11-28 18:47:09.190779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.655 [2024-11-28 18:47:09.217520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.037 test_start 00:06:41.037 test_end 00:06:41.037 Performance: 410341 events per second 00:06:41.037 00:06:41.037 real 0m1.309s 00:06:41.037 user 0m1.107s 00:06:41.037 sys 0m0.095s 00:06:41.037 18:47:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.037 18:47:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.037 ************************************ 00:06:41.037 END TEST event_reactor_perf 00:06:41.037 ************************************ 00:06:41.037 18:47:10 event -- event/event.sh@49 -- # uname -s 00:06:41.037 18:47:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:41.037 18:47:10 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:41.037 18:47:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.037 18:47:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.037 18:47:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.037 ************************************ 00:06:41.037 START TEST event_scheduler 00:06:41.037 ************************************ 00:06:41.037 18:47:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:41.037 * Looking for test storage... 00:06:41.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:41.037 18:47:10 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.037 18:47:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.037 18:47:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.037 18:47:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:41.037 18:47:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.038 18:47:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.038 --rc genhtml_branch_coverage=1 00:06:41.038 --rc genhtml_function_coverage=1 00:06:41.038 --rc genhtml_legend=1 00:06:41.038 --rc geninfo_all_blocks=1 00:06:41.038 --rc geninfo_unexecuted_blocks=1 00:06:41.038 00:06:41.038 ' 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.038 --rc genhtml_branch_coverage=1 00:06:41.038 --rc genhtml_function_coverage=1 00:06:41.038 --rc 
genhtml_legend=1 00:06:41.038 --rc geninfo_all_blocks=1 00:06:41.038 --rc geninfo_unexecuted_blocks=1 00:06:41.038 00:06:41.038 ' 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.038 --rc genhtml_branch_coverage=1 00:06:41.038 --rc genhtml_function_coverage=1 00:06:41.038 --rc genhtml_legend=1 00:06:41.038 --rc geninfo_all_blocks=1 00:06:41.038 --rc geninfo_unexecuted_blocks=1 00:06:41.038 00:06:41.038 ' 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.038 --rc genhtml_branch_coverage=1 00:06:41.038 --rc genhtml_function_coverage=1 00:06:41.038 --rc genhtml_legend=1 00:06:41.038 --rc geninfo_all_blocks=1 00:06:41.038 --rc geninfo_unexecuted_blocks=1 00:06:41.038 00:06:41.038 ' 00:06:41.038 18:47:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:41.038 18:47:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71791 00:06:41.038 18:47:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:41.038 18:47:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.038 18:47:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71791 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 71791 ']' 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.038 18:47:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.298 [2024-11-28 18:47:10.663712] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:41.298 [2024-11-28 18:47:10.663851] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71791 ] 00:06:41.298 [2024-11-28 18:47:10.799522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:41.298 [2024-11-28 18:47:10.834992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.298 [2024-11-28 18:47:10.863169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.298 [2024-11-28 18:47:10.863444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.298 [2024-11-28 18:47:10.863350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.298 [2024-11-28 18:47:10.863637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.867 18:47:11 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.867 18:47:11 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:41.867 18:47:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:41.867 18:47:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.867 18:47:11 event.event_scheduler -- 
common/autotest_common.sh@10 -- # set +x 00:06:42.127 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:42.127 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:42.127 POWER: intel_pstate driver is not supported 00:06:42.127 POWER: cppc_cpufreq driver is not supported 00:06:42.127 POWER: amd-pstate driver is not supported 00:06:42.127 POWER: acpi-cpufreq driver is not supported 00:06:42.127 POWER: Unable to set Power Management Environment for lcore 0 00:06:42.127 [2024-11-28 18:47:11.477227] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:42.127 [2024-11-28 18:47:11.477264] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:42.127 [2024-11-28 18:47:11.477276] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:42.127 [2024-11-28 18:47:11.477294] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:42.127 [2024-11-28 18:47:11.477319] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:42.127 [2024-11-28 18:47:11.477338] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:42.127 18:47:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:42.127 18:47:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.127 [2024-11-28 18:47:11.554000] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:42.127 18:47:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:42.127 18:47:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.127 18:47:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.127 ************************************ 00:06:42.127 START TEST scheduler_create_thread 00:06:42.127 ************************************ 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.127 2 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.127 3 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.127 4 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.127 5 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.127 6 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:42.127 7 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.127 8 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.127 9 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.127 18:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.698 10 00:06:42.698 18:47:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.698 18:47:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:42.698 18:47:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.698 18:47:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.078 18:47:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.078 18:47:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:44.078 18:47:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:44.078 18:47:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.078 18:47:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.711 18:47:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.711 18:47:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:44.711 18:47:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.711 18:47:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.679 18:47:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.679 18:47:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:45.679 18:47:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:45.679 18:47:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.679 18:47:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.247 18:47:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.247 00:06:46.247 real 0m4.216s 00:06:46.247 user 0m0.028s 00:06:46.247 sys 0m0.008s 00:06:46.247 18:47:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.247 18:47:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.247 ************************************ 00:06:46.247 END TEST scheduler_create_thread 00:06:46.247 ************************************ 00:06:46.247 18:47:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:46.247 18:47:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71791 00:06:46.247 18:47:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 71791 ']' 00:06:46.247 18:47:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 71791 00:06:46.247 18:47:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:46.247 18:47:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.247 18:47:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71791 00:06:46.506 18:47:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:46.506 18:47:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:46.506 killing process with pid 71791 00:06:46.506 18:47:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71791' 00:06:46.506 18:47:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 71791 00:06:46.506 18:47:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 71791 00:06:46.765 [2024-11-28 18:47:16.162507] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:47.024 00:06:47.024 real 0m6.075s 00:06:47.024 user 0m13.712s 00:06:47.024 sys 0m0.456s 00:06:47.024 18:47:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.024 18:47:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.024 ************************************ 00:06:47.024 END TEST event_scheduler 00:06:47.024 ************************************ 00:06:47.024 18:47:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:47.024 18:47:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:47.024 18:47:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.024 18:47:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.024 18:47:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.024 ************************************ 00:06:47.024 START TEST app_repeat 00:06:47.024 ************************************ 00:06:47.024 18:47:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71903 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:47.024 
18:47:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.024 Process app_repeat pid: 71903 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71903' 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:47.024 spdk_app_start Round 0 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:47.024 18:47:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71903 /var/tmp/spdk-nbd.sock 00:06:47.024 18:47:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71903 ']' 00:06:47.024 18:47:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.024 18:47:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.024 18:47:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.024 18:47:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.024 18:47:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.025 [2024-11-28 18:47:16.570031] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:47.025 [2024-11-28 18:47:16.570178] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71903 ] 00:06:47.283 [2024-11-28 18:47:16.704948] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:47.283 [2024-11-28 18:47:16.745017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.283 [2024-11-28 18:47:16.770878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.283 [2024-11-28 18:47:16.770922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.850 18:47:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.850 18:47:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:47.850 18:47:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.109 Malloc0 00:06:48.109 18:47:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.368 Malloc1 00:06:48.368 18:47:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.368 18:47:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.368 18:47:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.368 18:47:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.368 18:47:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.368 18:47:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.368 18:47:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.368 18:47:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.369 18:47:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.369 18:47:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.369 18:47:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.369 18:47:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.369 18:47:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.369 18:47:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.369 18:47:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.369 18:47:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:48.627 /dev/nbd0 00:06:48.627 18:47:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.627 18:47:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.627 1+0 records in 00:06:48.627 1+0 records out 00:06:48.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420695 s, 9.7 MB/s 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 
00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.627 18:47:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:48.627 18:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.627 18:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.627 18:47:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:48.886 /dev/nbd1 00:06:48.886 18:47:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:48.886 18:47:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.886 1+0 records in 00:06:48.886 1+0 records out 00:06:48.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363994 s, 11.3 MB/s 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.886 18:47:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:48.886 18:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.886 18:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.886 18:47:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.886 18:47:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.886 18:47:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.145 { 00:06:49.145 "nbd_device": "/dev/nbd0", 00:06:49.145 "bdev_name": "Malloc0" 00:06:49.145 }, 00:06:49.145 { 00:06:49.145 "nbd_device": "/dev/nbd1", 00:06:49.145 "bdev_name": "Malloc1" 00:06:49.145 } 00:06:49.145 ]' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.145 { 00:06:49.145 "nbd_device": "/dev/nbd0", 00:06:49.145 "bdev_name": "Malloc0" 00:06:49.145 }, 00:06:49.145 { 00:06:49.145 "nbd_device": "/dev/nbd1", 00:06:49.145 "bdev_name": "Malloc1" 00:06:49.145 } 00:06:49.145 ]' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.145 /dev/nbd1' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.145 /dev/nbd1' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c 
/dev/nbd 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.145 256+0 records in 00:06:49.145 256+0 records out 00:06:49.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141533 s, 74.1 MB/s 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.145 256+0 records in 00:06:49.145 256+0 records out 00:06:49.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204203 s, 51.3 MB/s 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.145 256+0 records in 00:06:49.145 256+0 records out 00:06:49.145 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0255592 s, 41.0 MB/s 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.145 18:47:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.145 18:47:18 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.404 18:47:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.663 18:47:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.663 
18:47:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:49.922 18:47:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:49.922 18:47:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.181 18:47:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.181 [2024-11-28 18:47:19.675902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.181 [2024-11-28 18:47:19.699290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.181 [2024-11-28 18:47:19.699293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.181 [2024-11-28 18:47:19.741546] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.181 [2024-11-28 18:47:19.741604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:53.467 spdk_app_start Round 1 00:06:53.467 18:47:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.468 18:47:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:53.468 18:47:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71903 /var/tmp/spdk-nbd.sock 00:06:53.468 18:47:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71903 ']' 00:06:53.468 18:47:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.468 18:47:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:53.468 18:47:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.468 18:47:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.468 18:47:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.468 18:47:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.468 18:47:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:53.468 18:47:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.468 Malloc0 00:06:53.468 18:47:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.726 Malloc1 00:06:53.726 18:47:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.726 18:47:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.726 18:47:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.726 
18:47:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:53.726 18:47:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.726 18:47:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:53.726 18:47:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.726 18:47:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.726 18:47:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.726 18:47:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.727 18:47:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.727 18:47:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.727 18:47:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:53.727 18:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.727 18:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.727 18:47:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.984 /dev/nbd0 00:06:53.984 18:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.984 18:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:53.984 18:47:23 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.984 1+0 records in 00:06:53.984 1+0 records out 00:06:53.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305917 s, 13.4 MB/s 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.984 18:47:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.984 18:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.984 18:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.984 18:47:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.243 /dev/nbd1 00:06:54.243 18:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.243 18:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.243 18:47:23 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.243 1+0 records in 00:06:54.243 1+0 records out 00:06:54.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297152 s, 13.8 MB/s 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.243 18:47:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:54.243 18:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.243 18:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.243 18:47:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.243 18:47:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.243 18:47:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.502 { 00:06:54.502 "nbd_device": "/dev/nbd0", 00:06:54.502 "bdev_name": "Malloc0" 00:06:54.502 }, 00:06:54.502 { 00:06:54.502 "nbd_device": "/dev/nbd1", 00:06:54.502 "bdev_name": 
"Malloc1" 00:06:54.502 } 00:06:54.502 ]' 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.502 { 00:06:54.502 "nbd_device": "/dev/nbd0", 00:06:54.502 "bdev_name": "Malloc0" 00:06:54.502 }, 00:06:54.502 { 00:06:54.502 "nbd_device": "/dev/nbd1", 00:06:54.502 "bdev_name": "Malloc1" 00:06:54.502 } 00:06:54.502 ]' 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.502 /dev/nbd1' 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.502 /dev/nbd1' 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.502 256+0 records in 00:06:54.502 256+0 records out 00:06:54.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139112 s, 75.4 MB/s 
00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.502 256+0 records in 00:06:54.502 256+0 records out 00:06:54.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204913 s, 51.2 MB/s 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.502 256+0 records in 00:06:54.502 256+0 records out 00:06:54.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213629 s, 49.1 MB/s 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.502 18:47:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.503 18:47:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.762 18:47:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.021 18:47:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.280 18:47:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.280 18:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.280 18:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.280 18:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.280 18:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.280 18:47:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.280 18:47:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.280 18:47:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.280 18:47:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.280 18:47:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.280 18:47:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.539 [2024-11-28 18:47:24.996115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.539 [2024-11-28 18:47:25.020837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.539 [2024-11-28 18:47:25.020860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.539 [2024-11-28 18:47:25.062857] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.539 [2024-11-28 18:47:25.062924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.895 spdk_app_start Round 2 00:06:58.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:58.895 18:47:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.895 18:47:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:58.895 18:47:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71903 /var/tmp/spdk-nbd.sock 00:06:58.895 18:47:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71903 ']' 00:06:58.895 18:47:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.895 18:47:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.895 18:47:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:58.895 18:47:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.895 18:47:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.895 18:47:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.895 18:47:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:58.895 18:47:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.895 Malloc0 00:06:58.895 18:47:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.895 Malloc1 00:06:58.895 18:47:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.895 18:47:28 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.895 18:47:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.155 /dev/nbd0 00:06:59.155 18:47:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.155 18:47:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.155 1+0 records in 00:06:59.155 1+0 records out 00:06:59.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018565 s, 22.1 MB/s 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.155 18:47:28 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.155 18:47:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:59.155 18:47:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.155 18:47:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.155 18:47:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.414 /dev/nbd1 00:06:59.414 18:47:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.414 18:47:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.414 1+0 records in 00:06:59.414 1+0 records out 00:06:59.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020904 s, 19.6 MB/s 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.414 18:47:28 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.414 18:47:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:59.414 18:47:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.414 18:47:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.414 18:47:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.414 18:47:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.414 18:47:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.674 { 00:06:59.674 "nbd_device": "/dev/nbd0", 00:06:59.674 "bdev_name": "Malloc0" 00:06:59.674 }, 00:06:59.674 { 00:06:59.674 "nbd_device": "/dev/nbd1", 00:06:59.674 "bdev_name": "Malloc1" 00:06:59.674 } 00:06:59.674 ]' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.674 { 00:06:59.674 "nbd_device": "/dev/nbd0", 00:06:59.674 "bdev_name": "Malloc0" 00:06:59.674 }, 00:06:59.674 { 00:06:59.674 "nbd_device": "/dev/nbd1", 00:06:59.674 "bdev_name": "Malloc1" 00:06:59.674 } 00:06:59.674 ]' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.674 /dev/nbd1' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.674 /dev/nbd1' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.674 
18:47:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.674 256+0 records in 00:06:59.674 256+0 records out 00:06:59.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540254 s, 194 MB/s 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.674 256+0 records in 00:06:59.674 256+0 records out 00:06:59.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159161 s, 65.9 MB/s 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.674 256+0 records in 00:06:59.674 256+0 records out 00:06:59.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206142 s, 50.9 MB/s 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.674 18:47:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.933 18:47:29 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.933 18:47:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.933 18:47:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.933 18:47:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.933 18:47:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.933 18:47:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.933 18:47:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.933 18:47:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.933 18:47:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.933 18:47:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.191 18:47:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.448 18:47:29 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.448 18:47:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.448 18:47:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.706 18:47:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:00.706 [2024-11-28 18:47:30.291721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.965 [2024-11-28 18:47:30.315654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.965 [2024-11-28 18:47:30.315656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.965 [2024-11-28 18:47:30.357344] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.965 [2024-11-28 18:47:30.357408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:04.253 18:47:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71903 /var/tmp/spdk-nbd.sock 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71903 ']' 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:04.253 18:47:33 event.app_repeat -- event/event.sh@39 -- # killprocess 71903 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 71903 ']' 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 71903 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71903 00:07:04.253 killing process with pid 71903 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71903' 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@973 -- # kill 71903 00:07:04.253 18:47:33 event.app_repeat -- 
common/autotest_common.sh@978 -- # wait 71903 00:07:04.253 spdk_app_start is called in Round 0. 00:07:04.253 Shutdown signal received, stop current app iteration 00:07:04.253 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:07:04.253 spdk_app_start is called in Round 1. 00:07:04.253 Shutdown signal received, stop current app iteration 00:07:04.253 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:07:04.253 spdk_app_start is called in Round 2. 00:07:04.253 Shutdown signal received, stop current app iteration 00:07:04.253 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:07:04.253 spdk_app_start is called in Round 3. 00:07:04.253 Shutdown signal received, stop current app iteration 00:07:04.253 18:47:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:04.253 18:47:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:04.253 00:07:04.253 real 0m17.079s 00:07:04.253 user 0m37.575s 00:07:04.253 sys 0m2.626s 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.253 18:47:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.253 ************************************ 00:07:04.253 END TEST app_repeat 00:07:04.253 ************************************ 00:07:04.253 18:47:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:04.253 18:47:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:04.253 18:47:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.253 18:47:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.253 18:47:33 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.253 ************************************ 00:07:04.253 START TEST cpu_locks 00:07:04.253 ************************************ 00:07:04.253 18:47:33 event.cpu_locks -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:04.253 * Looking for test storage... 00:07:04.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:04.253 18:47:33 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.253 18:47:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.253 18:47:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.513 18:47:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:04.513 18:47:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:04.514 18:47:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.514 18:47:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:04.514 18:47:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.514 18:47:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.514 18:47:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.514 18:47:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:04.514 18:47:33 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.514 18:47:33 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.514 --rc genhtml_branch_coverage=1 00:07:04.514 --rc genhtml_function_coverage=1 00:07:04.514 --rc genhtml_legend=1 00:07:04.514 --rc geninfo_all_blocks=1 00:07:04.514 --rc geninfo_unexecuted_blocks=1 00:07:04.514 00:07:04.514 ' 00:07:04.514 18:47:33 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.514 --rc genhtml_branch_coverage=1 00:07:04.514 --rc genhtml_function_coverage=1 00:07:04.514 --rc genhtml_legend=1 00:07:04.514 --rc geninfo_all_blocks=1 00:07:04.514 --rc geninfo_unexecuted_blocks=1 
00:07:04.514 00:07:04.514 ' 00:07:04.514 18:47:33 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.514 --rc genhtml_branch_coverage=1 00:07:04.514 --rc genhtml_function_coverage=1 00:07:04.514 --rc genhtml_legend=1 00:07:04.514 --rc geninfo_all_blocks=1 00:07:04.514 --rc geninfo_unexecuted_blocks=1 00:07:04.514 00:07:04.514 ' 00:07:04.514 18:47:33 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.514 --rc genhtml_branch_coverage=1 00:07:04.514 --rc genhtml_function_coverage=1 00:07:04.514 --rc genhtml_legend=1 00:07:04.514 --rc geninfo_all_blocks=1 00:07:04.514 --rc geninfo_unexecuted_blocks=1 00:07:04.514 00:07:04.514 ' 00:07:04.514 18:47:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:04.514 18:47:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:04.514 18:47:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:04.514 18:47:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:04.514 18:47:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.514 18:47:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.514 18:47:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.514 ************************************ 00:07:04.514 START TEST default_locks 00:07:04.514 ************************************ 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72328 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72328 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72328 ']' 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.514 18:47:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.514 [2024-11-28 18:47:34.016123] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:04.514 [2024-11-28 18:47:34.016374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72328 ] 00:07:04.773 [2024-11-28 18:47:34.159873] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:04.773 [2024-11-28 18:47:34.199006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.773 [2024-11-28 18:47:34.223746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72328 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72328 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72328 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72328 ']' 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72328 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.341 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72328 00:07:05.601 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.601 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.601 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72328' 00:07:05.601 killing process with pid 72328 00:07:05.601 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72328 00:07:05.601 18:47:34 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72328 00:07:05.860 18:47:35 
event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72328 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72328 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:05.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.860 ERROR: process (pid: 72328) is no longer running 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72328 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72328 ']' 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.860 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72328) - No such process 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:05.860 00:07:05.860 real 0m1.418s 00:07:05.860 user 0m1.323s 00:07:05.860 sys 0m0.501s 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.860 18:47:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.860 ************************************ 00:07:05.860 END TEST default_locks 00:07:05.860 ************************************ 00:07:05.860 18:47:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:05.860 18:47:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.860 18:47:35 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.860 18:47:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.860 ************************************ 00:07:05.860 START TEST default_locks_via_rpc 00:07:05.860 ************************************ 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72370 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72370 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72370 ']' 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.860 18:47:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.120 [2024-11-28 18:47:35.493603] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:07:06.120 [2024-11-28 18:47:35.493817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72370 ] 00:07:06.120 [2024-11-28 18:47:35.629198] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.120 [2024-11-28 18:47:35.667366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.120 [2024-11-28 18:47:35.691879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72370 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.058 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72370 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72370 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72370 ']' 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72370 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72370 00:07:07.318 killing process with pid 72370 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72370' 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72370 00:07:07.318 18:47:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72370 00:07:07.887 00:07:07.887 real 0m1.848s 00:07:07.887 user 0m1.829s 
00:07:07.887 sys 0m0.651s 00:07:07.887 18:47:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.887 ************************************ 00:07:07.887 END TEST default_locks_via_rpc 00:07:07.887 ************************************ 00:07:07.887 18:47:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 18:47:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:07.887 18:47:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.887 18:47:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.887 18:47:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 ************************************ 00:07:07.887 START TEST non_locking_app_on_locked_coremask 00:07:07.887 ************************************ 00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72424 00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72424 /var/tmp/spdk.sock 00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72424 ']' 00:07:07.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.887 18:47:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 [2024-11-28 18:47:37.404770] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:07.887 [2024-11-28 18:47:37.404884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72424 ] 00:07:08.146 [2024-11-28 18:47:37.539021] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:08.146 [2024-11-28 18:47:37.555558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.146 [2024-11-28 18:47:37.580329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72441 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72441 /var/tmp/spdk2.sock 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72441 ']' 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.714 18:47:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.714 [2024-11-28 18:47:38.311157] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:08.714 [2024-11-28 18:47:38.311735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72441 ] 00:07:08.973 [2024-11-28 18:47:38.450420] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:08.973 [2024-11-28 18:47:38.482397] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:08.973 [2024-11-28 18:47:38.482440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.973 [2024-11-28 18:47:38.531922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.542 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.542 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:09.542 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72424 00:07:09.542 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72424 00:07:09.542 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72424 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # '[' -z 72424 ']' 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72424 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72424 00:07:10.157 killing process with pid 72424 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72424' 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72424 00:07:10.157 18:47:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72424 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72441 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72441 ']' 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72441 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72441 00:07:11.095 killing process with pid 72441 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72441' 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72441 00:07:11.095 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72441 00:07:11.355 00:07:11.355 real 0m3.463s 00:07:11.355 user 0m3.646s 00:07:11.356 sys 0m1.078s 00:07:11.356 ************************************ 00:07:11.356 END TEST non_locking_app_on_locked_coremask 00:07:11.356 ************************************ 00:07:11.356 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.356 18:47:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.356 18:47:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:11.356 18:47:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.356 18:47:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.356 18:47:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.356 ************************************ 00:07:11.356 START TEST locking_app_on_unlocked_coremask 00:07:11.356 ************************************ 00:07:11.356 18:47:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:11.356 18:47:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72503 00:07:11.356 18:47:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:11.356 18:47:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72503 /var/tmp/spdk.sock 00:07:11.356 18:47:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72503 ']' 00:07:11.356 18:47:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.356 18:47:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.356 18:47:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.356 18:47:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.356 18:47:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.356 [2024-11-28 18:47:40.935688] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:11.356 [2024-11-28 18:47:40.935820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72503 ] 00:07:11.616 [2024-11-28 18:47:41.070711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:11.616 [2024-11-28 18:47:41.107779] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:11.616 [2024-11-28 18:47:41.107860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.616 [2024-11-28 18:47:41.132360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72515 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72515 /var/tmp/spdk2.sock 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72515 ']' 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.185 18:47:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.444 [2024-11-28 18:47:41.812342] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:12.444 [2024-11-28 18:47:41.812575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72515 ] 00:07:12.444 [2024-11-28 18:47:41.945955] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.444 [2024-11-28 18:47:41.978100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.444 [2024-11-28 18:47:42.032759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.382 18:47:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.382 18:47:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:13.382 18:47:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72515 00:07:13.382 18:47:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72515 00:07:13.382 18:47:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72503 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72503 ']' 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # kill -0 72503 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72503 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72503' 00:07:13.950 killing process with pid 72503 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72503 00:07:13.950 18:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72503 00:07:14.520 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72515 00:07:14.520 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72515 ']' 00:07:14.520 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72515 00:07:14.520 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:14.520 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.520 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72515 00:07:14.779 killing process with pid 72515 00:07:14.779 18:47:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.779 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.779 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72515' 00:07:14.779 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72515 00:07:14.779 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72515 00:07:15.038 00:07:15.038 real 0m3.657s 00:07:15.038 user 0m3.809s 00:07:15.038 sys 0m1.150s 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.038 ************************************ 00:07:15.038 END TEST locking_app_on_unlocked_coremask 00:07:15.038 ************************************ 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.038 18:47:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:15.038 18:47:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.038 18:47:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.038 18:47:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.038 ************************************ 00:07:15.038 START TEST locking_app_on_locked_coremask 00:07:15.038 ************************************ 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72584 00:07:15.038 18:47:44 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72584 /var/tmp/spdk.sock 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72584 ']' 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.038 18:47:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.297 [2024-11-28 18:47:44.662688] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:15.297 [2024-11-28 18:47:44.662917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72584 ] 00:07:15.297 [2024-11-28 18:47:44.797814] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:15.297 [2024-11-28 18:47:44.835702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.297 [2024-11-28 18:47:44.860373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72600 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72600 /var/tmp/spdk2.sock 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72600 /var/tmp/spdk2.sock 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.863 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:16.121 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.121 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72600 /var/tmp/spdk2.sock 00:07:16.121 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72600 ']' 00:07:16.121 18:47:45 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.121 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.121 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.121 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.121 18:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.121 [2024-11-28 18:47:45.557615] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:16.121 [2024-11-28 18:47:45.557828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72600 ] 00:07:16.121 [2024-11-28 18:47:45.692386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:16.121 [2024-11-28 18:47:45.724805] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72584 has claimed it. 00:07:16.121 [2024-11-28 18:47:45.724858] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:16.689 ERROR: process (pid: 72600) is no longer running 00:07:16.689 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72600) - No such process 00:07:16.689 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.689 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:16.689 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:16.689 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.689 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.689 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.689 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72584 00:07:16.689 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72584 00:07:16.689 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.946 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72584 00:07:16.946 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72584 ']' 00:07:16.946 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72584 00:07:16.946 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:16.946 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.204 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72584 00:07:17.204 
18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.204 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.204 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72584' 00:07:17.204 killing process with pid 72584 00:07:17.204 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72584 00:07:17.204 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72584 00:07:17.461 00:07:17.461 real 0m2.382s 00:07:17.461 user 0m2.553s 00:07:17.461 sys 0m0.714s 00:07:17.461 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.461 18:47:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.461 ************************************ 00:07:17.461 END TEST locking_app_on_locked_coremask 00:07:17.461 ************************************ 00:07:17.461 18:47:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:17.461 18:47:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.461 18:47:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.461 18:47:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.461 ************************************ 00:07:17.461 START TEST locking_overlapped_coremask 00:07:17.461 ************************************ 00:07:17.461 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:17.461 18:47:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72642 00:07:17.461 18:47:47 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:17.461 18:47:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72642 /var/tmp/spdk.sock 00:07:17.461 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72642 ']' 00:07:17.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.461 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.461 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.461 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.461 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.461 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.720 [2024-11-28 18:47:47.111970] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:17.720 [2024-11-28 18:47:47.112087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72642 ] 00:07:17.720 [2024-11-28 18:47:47.247746] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:17.720 [2024-11-28 18:47:47.285860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.720 [2024-11-28 18:47:47.312845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.720 [2024-11-28 18:47:47.312850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.720 [2024-11-28 18:47:47.312955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72660 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72660 /var/tmp/spdk2.sock 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72660 /var/tmp/spdk2.sock 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 
72660 /var/tmp/spdk2.sock 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72660 ']' 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.659 18:47:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.659 [2024-11-28 18:47:48.047944] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:18.659 [2024-11-28 18:47:48.048184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72660 ] 00:07:18.659 [2024-11-28 18:47:48.212387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:18.659 [2024-11-28 18:47:48.245066] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72642 has claimed it. 00:07:18.659 [2024-11-28 18:47:48.245114] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:19.229 ERROR: process (pid: 72660) is no longer running 00:07:19.229 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72660) - No such process 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72642 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72642 ']' 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72642 00:07:19.229 18:47:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72642 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72642' 00:07:19.229 killing process with pid 72642 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72642 00:07:19.229 18:47:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72642 00:07:19.489 00:07:19.489 real 0m2.049s 00:07:19.489 user 0m5.511s 00:07:19.489 sys 0m0.556s 00:07:19.489 18:47:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.489 18:47:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.489 ************************************ 00:07:19.489 END TEST locking_overlapped_coremask 00:07:19.489 ************************************ 00:07:19.749 18:47:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:19.749 18:47:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.749 18:47:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.749 18:47:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.749 ************************************ 00:07:19.749 START TEST 
locking_overlapped_coremask_via_rpc 00:07:19.749 ************************************ 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72702 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72702 /var/tmp/spdk.sock 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72702 ']' 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.749 18:47:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.749 [2024-11-28 18:47:49.237479] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:07:19.749 [2024-11-28 18:47:49.237667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72702 ] 00:07:20.008 [2024-11-28 18:47:49.374477] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.008 [2024-11-28 18:47:49.415123] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:20.008 [2024-11-28 18:47:49.415158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.008 [2024-11-28 18:47:49.442342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.008 [2024-11-28 18:47:49.442450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.008 [2024-11-28 18:47:49.442585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72720 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72720 /var/tmp/spdk2.sock 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72720 ']' 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.578 18:47:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 [2024-11-28 18:47:50.160833] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:20.578 [2024-11-28 18:47:50.161115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72720 ] 00:07:20.837 [2024-11-28 18:47:50.307220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.837 [2024-11-28 18:47:50.342671] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.837 [2024-11-28 18:47:50.342709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.837 [2024-11-28 18:47:50.400206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.837 [2024-11-28 18:47:50.403638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.837 [2024-11-28 18:47:50.403775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.429 18:47:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.429 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.429 [2024-11-28 18:47:51.024614] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72702 has claimed it. 00:07:21.688 request: 00:07:21.688 { 00:07:21.688 "method": "framework_enable_cpumask_locks", 00:07:21.688 "req_id": 1 00:07:21.688 } 00:07:21.688 Got JSON-RPC error response 00:07:21.688 response: 00:07:21.688 { 00:07:21.688 "code": -32603, 00:07:21.688 "message": "Failed to claim CPU core: 2" 00:07:21.688 } 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72702 /var/tmp/spdk.sock 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 72702 ']' 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72720 /var/tmp/spdk2.sock 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72720 ']' 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.688 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.946 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.946 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.946 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:21.946 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.946 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.946 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.946 00:07:21.946 real 0m2.349s 00:07:21.946 user 0m1.080s 00:07:21.946 sys 0m0.187s 00:07:21.946 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.946 18:47:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.946 ************************************ 00:07:21.946 END TEST locking_overlapped_coremask_via_rpc 00:07:21.946 ************************************ 00:07:21.946 18:47:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:21.946 18:47:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72702 ]] 00:07:21.946 18:47:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 72702 00:07:21.946 18:47:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72702 ']' 00:07:21.946 18:47:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72702 00:07:21.946 18:47:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.204 18:47:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.204 18:47:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72702 00:07:22.204 killing process with pid 72702 00:07:22.204 18:47:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.204 18:47:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.204 18:47:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72702' 00:07:22.204 18:47:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72702 00:07:22.204 18:47:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72702 00:07:22.463 18:47:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72720 ]] 00:07:22.463 18:47:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72720 00:07:22.463 18:47:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72720 ']' 00:07:22.463 18:47:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72720 00:07:22.463 18:47:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.463 18:47:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.463 18:47:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72720 00:07:22.463 killing process with pid 72720 00:07:22.463 18:47:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:22.463 18:47:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:22.463 18:47:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 72720' 00:07:22.463 18:47:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72720 00:07:22.463 18:47:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72720 00:07:23.031 18:47:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.031 18:47:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:23.031 18:47:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72702 ]] 00:07:23.031 18:47:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72702 00:07:23.031 18:47:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72702 ']' 00:07:23.031 18:47:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72702 00:07:23.031 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72702) - No such process 00:07:23.031 Process with pid 72702 is not found 00:07:23.031 Process with pid 72720 is not found 00:07:23.031 18:47:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72702 is not found' 00:07:23.031 18:47:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72720 ]] 00:07:23.031 18:47:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72720 00:07:23.031 18:47:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72720 ']' 00:07:23.031 18:47:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72720 00:07:23.031 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72720) - No such process 00:07:23.031 18:47:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72720 is not found' 00:07:23.031 18:47:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.031 00:07:23.031 real 0m18.736s 00:07:23.031 user 0m31.309s 00:07:23.031 sys 0m5.975s 00:07:23.031 18:47:52 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.031 18:47:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.031 
************************************ 00:07:23.031 END TEST cpu_locks 00:07:23.031 ************************************ 00:07:23.031 00:07:23.031 real 0m46.461s 00:07:23.031 user 1m29.155s 00:07:23.031 sys 0m9.742s 00:07:23.031 18:47:52 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.031 18:47:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.031 ************************************ 00:07:23.031 END TEST event 00:07:23.031 ************************************ 00:07:23.031 18:47:52 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:23.031 18:47:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.031 18:47:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.031 18:47:52 -- common/autotest_common.sh@10 -- # set +x 00:07:23.031 ************************************ 00:07:23.031 START TEST thread 00:07:23.031 ************************************ 00:07:23.031 18:47:52 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:23.031 * Looking for test storage... 
00:07:23.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.291 18:47:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.291 18:47:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.291 18:47:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.291 18:47:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.291 18:47:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.291 18:47:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.291 18:47:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.291 18:47:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.291 18:47:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.291 18:47:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.291 18:47:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.291 18:47:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:23.291 18:47:52 thread -- scripts/common.sh@345 -- # : 1 00:07:23.291 18:47:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.291 18:47:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.291 18:47:52 thread -- scripts/common.sh@365 -- # decimal 1 00:07:23.291 18:47:52 thread -- scripts/common.sh@353 -- # local d=1 00:07:23.291 18:47:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.291 18:47:52 thread -- scripts/common.sh@355 -- # echo 1 00:07:23.291 18:47:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.291 18:47:52 thread -- scripts/common.sh@366 -- # decimal 2 00:07:23.291 18:47:52 thread -- scripts/common.sh@353 -- # local d=2 00:07:23.291 18:47:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.291 18:47:52 thread -- scripts/common.sh@355 -- # echo 2 00:07:23.291 18:47:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.291 18:47:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.291 18:47:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.291 18:47:52 thread -- scripts/common.sh@368 -- # return 0 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.291 --rc genhtml_branch_coverage=1 00:07:23.291 --rc genhtml_function_coverage=1 00:07:23.291 --rc genhtml_legend=1 00:07:23.291 --rc geninfo_all_blocks=1 00:07:23.291 --rc geninfo_unexecuted_blocks=1 00:07:23.291 00:07:23.291 ' 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.291 --rc genhtml_branch_coverage=1 00:07:23.291 --rc genhtml_function_coverage=1 00:07:23.291 --rc genhtml_legend=1 00:07:23.291 --rc geninfo_all_blocks=1 00:07:23.291 --rc geninfo_unexecuted_blocks=1 00:07:23.291 00:07:23.291 ' 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.291 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.291 --rc genhtml_branch_coverage=1 00:07:23.291 --rc genhtml_function_coverage=1 00:07:23.291 --rc genhtml_legend=1 00:07:23.291 --rc geninfo_all_blocks=1 00:07:23.291 --rc geninfo_unexecuted_blocks=1 00:07:23.291 00:07:23.291 ' 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.291 --rc genhtml_branch_coverage=1 00:07:23.291 --rc genhtml_function_coverage=1 00:07:23.291 --rc genhtml_legend=1 00:07:23.291 --rc geninfo_all_blocks=1 00:07:23.291 --rc geninfo_unexecuted_blocks=1 00:07:23.291 00:07:23.291 ' 00:07:23.291 18:47:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.291 18:47:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.291 ************************************ 00:07:23.291 START TEST thread_poller_perf 00:07:23.291 ************************************ 00:07:23.291 18:47:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.291 [2024-11-28 18:47:52.791985] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:23.291 [2024-11-28 18:47:52.792112] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72860 ] 00:07:23.551 [2024-11-28 18:47:52.923938] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:23.551 [2024-11-28 18:47:52.963989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.551 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:23.551 [2024-11-28 18:47:52.988819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.488 [2024-11-28T18:47:54.094Z] ====================================== 00:07:24.488 [2024-11-28T18:47:54.094Z] busy:2303349116 (cyc) 00:07:24.488 [2024-11-28T18:47:54.094Z] total_run_count: 423000 00:07:24.488 [2024-11-28T18:47:54.094Z] tsc_hz: 2294600000 (cyc) 00:07:24.488 [2024-11-28T18:47:54.094Z] ====================================== 00:07:24.488 [2024-11-28T18:47:54.094Z] poller_cost: 5445 (cyc), 2372 (nsec) 00:07:24.488 00:07:24.488 real 0m1.316s 00:07:24.488 user 0m1.116s 00:07:24.488 sys 0m0.094s 00:07:24.488 18:47:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.488 18:47:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.488 ************************************ 00:07:24.488 END TEST thread_poller_perf 00:07:24.488 ************************************ 00:07:24.747 18:47:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.747 18:47:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:24.747 18:47:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.747 18:47:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.747 ************************************ 00:07:24.747 START TEST thread_poller_perf 00:07:24.747 ************************************ 00:07:24.747 18:47:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.747 [2024-11-28 18:47:54.175215] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 
initialization... 00:07:24.747 [2024-11-28 18:47:54.175664] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72891 ] 00:07:24.747 [2024-11-28 18:47:54.306617] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:24.747 [2024-11-28 18:47:54.345316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.006 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:25.006 [2024-11-28 18:47:54.370501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.944 [2024-11-28T18:47:55.550Z] ====================================== 00:07:25.944 [2024-11-28T18:47:55.550Z] busy:2297782908 (cyc) 00:07:25.944 [2024-11-28T18:47:55.550Z] total_run_count: 5578000 00:07:25.944 [2024-11-28T18:47:55.550Z] tsc_hz: 2294600000 (cyc) 00:07:25.944 [2024-11-28T18:47:55.550Z] ====================================== 00:07:25.944 [2024-11-28T18:47:55.550Z] poller_cost: 411 (cyc), 179 (nsec) 00:07:25.944 00:07:25.944 real 0m1.308s 00:07:25.944 user 0m1.113s 00:07:25.944 sys 0m0.088s 00:07:25.945 18:47:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.945 18:47:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.945 ************************************ 00:07:25.945 END TEST thread_poller_perf 00:07:25.945 ************************************ 00:07:25.945 18:47:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:25.945 00:07:25.945 real 0m2.976s 00:07:25.945 user 0m2.387s 00:07:25.945 sys 0m0.389s 00:07:25.945 18:47:55 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.945 18:47:55 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.945 
************************************ 00:07:25.945 END TEST thread 00:07:25.945 ************************************ 00:07:25.945 18:47:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:25.945 18:47:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.205 18:47:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.205 18:47:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.205 18:47:55 -- common/autotest_common.sh@10 -- # set +x 00:07:26.205 ************************************ 00:07:26.205 START TEST app_cmdline 00:07:26.205 ************************************ 00:07:26.205 18:47:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.205 * Looking for test storage... 00:07:26.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:26.205 18:47:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:26.205 18:47:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72980 00:07:26.205 18:47:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:26.205 18:47:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72980 00:07:26.205 18:47:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 72980 ']' 00:07:26.205 18:47:55 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.205 18:47:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.205 Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock... 00:07:26.205 18:47:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.205 18:47:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.205 18:47:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:26.465 [2024-11-28 18:47:55.881552] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:26.465 [2024-11-28 18:47:55.881674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72980 ] 00:07:26.465 [2024-11-28 18:47:56.016242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.465 [2024-11-28 18:47:56.056401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.723 [2024-11-28 18:47:56.081854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.291 18:47:56 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.291 18:47:56 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:27.291 18:47:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:27.291 { 00:07:27.291 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:07:27.291 "fields": { 00:07:27.291 "major": 25, 00:07:27.291 "minor": 1, 00:07:27.291 "patch": 0, 00:07:27.291 "suffix": "-pre", 00:07:27.291 "commit": "35cd3e84d" 00:07:27.291 } 00:07:27.291 } 00:07:27.291 18:47:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:27.291 18:47:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:27.291 18:47:56 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:27.291 18:47:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:27.291 18:47:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:27.291 18:47:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:27.291 18:47:56 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.291 18:47:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.291 18:47:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.551 18:47:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:27.551 18:47:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:27.551 18:47:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@646 -- # 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:27.551 18:47:56 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.551 request: 00:07:27.551 { 00:07:27.551 "method": "env_dpdk_get_mem_stats", 00:07:27.551 "req_id": 1 00:07:27.551 } 00:07:27.551 Got JSON-RPC error response 00:07:27.551 response: 00:07:27.551 { 00:07:27.551 "code": -32601, 00:07:27.551 "message": "Method not found" 00:07:27.551 } 00:07:27.551 18:47:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:27.551 18:47:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.551 18:47:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.551 18:47:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.551 18:47:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72980 00:07:27.551 18:47:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 72980 ']' 00:07:27.551 18:47:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 72980 00:07:27.551 18:47:57 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:27.551 18:47:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.551 18:47:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72980 00:07:27.811 18:47:57 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.811 18:47:57 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.811 killing process with pid 72980 00:07:27.811 18:47:57 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72980' 00:07:27.811 18:47:57 app_cmdline -- common/autotest_common.sh@973 -- # kill 72980 00:07:27.811 18:47:57 app_cmdline -- common/autotest_common.sh@978 -- # wait 72980 00:07:28.071 
00:07:28.071 real 0m1.987s 00:07:28.071 user 0m2.195s 00:07:28.071 sys 0m0.575s 00:07:28.071 18:47:57 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.071 18:47:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.071 ************************************ 00:07:28.071 END TEST app_cmdline 00:07:28.071 ************************************ 00:07:28.071 18:47:57 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:28.071 18:47:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.071 18:47:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.071 18:47:57 -- common/autotest_common.sh@10 -- # set +x 00:07:28.071 ************************************ 00:07:28.071 START TEST version 00:07:28.071 ************************************ 00:07:28.071 18:47:57 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:28.330 * Looking for test storage... 
00:07:28.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:28.330 18:47:57 version -- app/version.sh@17 -- # get_header_version major 00:07:28.331 18:47:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.331 18:47:57 version -- app/version.sh@14 -- # cut -f2 00:07:28.331 18:47:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.331 18:47:57 version -- app/version.sh@17 -- # major=25 00:07:28.331 18:47:57 version -- app/version.sh@18 -- # get_header_version minor 00:07:28.331 18:47:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.331 18:47:57 version -- app/version.sh@14 -- # cut -f2 00:07:28.331 18:47:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.331 18:47:57 version -- app/version.sh@18 -- # minor=1 00:07:28.331 18:47:57 version -- app/version.sh@19 -- # get_header_version patch 00:07:28.331 18:47:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.331 18:47:57 version -- app/version.sh@14 -- # cut -f2 00:07:28.331 18:47:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.331 18:47:57 version -- app/version.sh@19 -- # patch=0 00:07:28.331
18:47:57 version -- app/version.sh@20 -- # get_header_version suffix 00:07:28.331 18:47:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.331 18:47:57 version -- app/version.sh@14 -- # cut -f2 00:07:28.331 18:47:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.331 18:47:57 version -- app/version.sh@20 -- # suffix=-pre 00:07:28.331 18:47:57 version -- app/version.sh@22 -- # version=25.1 00:07:28.331 18:47:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:28.331 18:47:57 version -- app/version.sh@28 -- # version=25.1rc0 00:07:28.331 18:47:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:28.331 18:47:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:28.331 18:47:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:28.331 18:47:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:28.331 00:07:28.331 real 0m0.311s 00:07:28.331 user 0m0.179s 00:07:28.331 sys 0m0.192s 00:07:28.331 18:47:57 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.331 18:47:57 version -- common/autotest_common.sh@10 -- # set +x 00:07:28.331 ************************************ 00:07:28.331 END TEST version 00:07:28.331 ************************************ 00:07:28.591 18:47:57 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:28.591 18:47:57 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:28.591 18:47:57 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:28.591 18:47:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.591 18:47:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.591 18:47:57 -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.591 ************************************ 00:07:28.591 START TEST bdev_raid 00:07:28.591 ************************************ 00:07:28.591 18:47:57 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh * Looking for test storage... 00:07:28.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:28.591 18:47:58 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:28.852 18:47:58 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:28.852 18:47:58 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:28.852 18:47:58 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:28.852 18:47:58 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:28.852 18:47:58 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:28.852 18:47:58 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:28.852 18:47:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.852 18:47:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.852 18:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.852 ************************************ 00:07:28.852 START TEST raid1_resize_data_offset_test 00:07:28.852 ************************************ 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- #
raid_pid=73142 00:07:28.853 Process raid pid: 73142 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 73142' 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 73142 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 73142 ']' 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.853 18:47:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.853 [2024-11-28 18:47:58.321968] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:28.853 [2024-11-28 18:47:58.322084] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.112 [2024-11-28 18:47:58.457893] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:29.112 [2024-11-28 18:47:58.495250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.112 [2024-11-28 18:47:58.521247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.113 [2024-11-28 18:47:58.563675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.113 [2024-11-28 18:47:58.563710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.683 malloc0 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.683 malloc1 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:29.683 null0 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.683 [2024-11-28 18:47:59.217619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:29.683 [2024-11-28 18:47:59.219692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:29.683 [2024-11-28 18:47:59.219751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:29.683 [2024-11-28 18:47:59.219891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:29.683 [2024-11-28 18:47:59.219908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:29.683 [2024-11-28 18:47:59.220209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:29.683 [2024-11-28 18:47:59.220386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:29.683 [2024-11-28 18:47:59.220400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:29.683 [2024-11-28 18:47:59.220557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq 
-r '.[].base_bdevs_list[2].data_offset' 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.683 [2024-11-28 18:47:59.273603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.683 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.944 malloc2 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.944 [2024-11-28 18:47:59.404564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc2 is claimed 00:07:29.944 [2024-11-28 18:47:59.410944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.944 [2024-11-28 18:47:59.413915] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 73142 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 73142 ']' 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 73142 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73142 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.944 killing process with pid 73142 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73142' 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 73142 00:07:29.944 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 73142 00:07:29.944 [2024-11-28 18:47:59.496747] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:29.944 [2024-11-28 18:47:59.497361] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:29.944 [2024-11-28 18:47:59.497419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.944 [2024-11-28 18:47:59.497456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:29.944 [2024-11-28 18:47:59.503201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.945 [2024-11-28 18:47:59.503496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.945 [2024-11-28 18:47:59.503528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:30.205 [2024-11-28 18:47:59.712517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.466 18:47:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:30.466 00:07:30.466 real 0m1.690s 00:07:30.466 user 0m1.669s 00:07:30.466 sys 0m0.442s 00:07:30.466 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.466 18:47:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.466 ************************************ 00:07:30.466 END TEST raid1_resize_data_offset_test 00:07:30.466 
************************************ 00:07:30.466 18:47:59 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:30.466 18:47:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.466 18:47:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.466 18:47:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.466 ************************************ 00:07:30.466 START TEST raid0_resize_superblock_test 00:07:30.466 ************************************ 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:30.466 Process raid pid: 73198 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73198 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73198' 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73198 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73198 ']' 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.466 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.726 [2024-11-28 18:48:00.098331] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:30.726 [2024-11-28 18:48:00.098492] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.726 [2024-11-28 18:48:00.241683] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:30.726 [2024-11-28 18:48:00.283014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.726 [2024-11-28 18:48:00.307916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.986 [2024-11-28 18:48:00.350139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.986 [2024-11-28 18:48:00.350175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.556 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.556 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.556 18:48:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:31.556 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.556 18:48:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 malloc0 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 [2024-11-28 18:48:01.011008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:31.556 [2024-11-28 18:48:01.011066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.556 [2024-11-28 18:48:01.011097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:31.556 [2024-11-28 18:48:01.011109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:31.556 [2024-11-28 18:48:01.013207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.556 [2024-11-28 18:48:01.013241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:31.556 pt0 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 1d9a9fa3-9e34-4f52-b731-19bb69981ce6 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 e1a057c9-6729-4e12-a287-896bee025a19 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 42f8b875-e2a8-4fb4-96b8-8d6b88dd3fe6 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 
-- # case $raid_level in 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 [2024-11-28 18:48:01.142874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e1a057c9-6729-4e12-a287-896bee025a19 is claimed 00:07:31.556 [2024-11-28 18:48:01.142952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 42f8b875-e2a8-4fb4-96b8-8d6b88dd3fe6 is claimed 00:07:31.556 [2024-11-28 18:48:01.143059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:31.556 [2024-11-28 18:48:01.143069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:31.556 [2024-11-28 18:48:01.143397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:31.556 [2024-11-28 18:48:01.143586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:31.556 [2024-11-28 18:48:01.143630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:31.556 [2024-11-28 18:48:01.143781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.556 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:31.556 18:48:01 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 [2024-11-28 18:48:01.255097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 
00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 [2024-11-28 18:48:01.291040] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.815 [2024-11-28 18:48:01.291072] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e1a057c9-6729-4e12-a287-896bee025a19' was resized: old size 131072, new size 204800 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 [2024-11-28 18:48:01.302977] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.815 [2024-11-28 18:48:01.303005] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '42f8b875-e2a8-4fb4-96b8-8d6b88dd3fe6' was resized: old size 131072, new size 204800 00:07:31.815 [2024-11-28 18:48:01.303021] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.815 18:48:01 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:31.815 [2024-11-28 18:48:01.395111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.815 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.076 [2024-11-28 18:48:01.423005] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:32.076 [2024-11-28 18:48:01.423084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:32.076 [2024-11-28 18:48:01.423099] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.076 [2024-11-28 18:48:01.423120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:32.076 [2024-11-28 18:48:01.423238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.076 [2024-11-28 18:48:01.423277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.076 [2024-11-28 18:48:01.423285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.076 18:48:01 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.076 [2024-11-28 18:48:01.434912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:32.076 [2024-11-28 18:48:01.434951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.076 [2024-11-28 18:48:01.434969] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:32.076 [2024-11-28 18:48:01.434977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.076 [2024-11-28 18:48:01.437038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.076 [2024-11-28 18:48:01.437072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:32.076 [2024-11-28 18:48:01.438619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e1a057c9-6729-4e12-a287-896bee025a19 00:07:32.076 [2024-11-28 18:48:01.438665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e1a057c9-6729-4e12-a287-896bee025a19 is claimed 00:07:32.076 [2024-11-28 18:48:01.438758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 42f8b875-e2a8-4fb4-96b8-8d6b88dd3fe6 00:07:32.076 [2024-11-28 18:48:01.438785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 42f8b875-e2a8-4fb4-96b8-8d6b88dd3fe6 is claimed 00:07:32.076 [2024-11-28 18:48:01.438861] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 42f8b875-e2a8-4fb4-96b8-8d6b88dd3fe6 (2) smaller than existing raid bdev Raid (3) 00:07:32.076 [2024-11-28 18:48:01.438875] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine 
bdev e1a057c9-6729-4e12-a287-896bee025a19: File exists 00:07:32.076 [2024-11-28 18:48:01.438921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:32.076 [2024-11-28 18:48:01.438927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:32.076 [2024-11-28 18:48:01.439178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:07:32.076 [2024-11-28 18:48:01.439325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:32.076 [2024-11-28 18:48:01.439338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:32.076 [2024-11-28 18:48:01.439501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.076 pt0 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:32.076 18:48:01 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.076 [2024-11-28 18:48:01.463162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73198 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73198 ']' 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73198 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73198 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.076 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.077 killing process with pid 73198 00:07:32.077 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73198' 00:07:32.077 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73198 00:07:32.077 [2024-11-28 18:48:01.543247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:32.077 [2024-11-28 18:48:01.543320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.077 [2024-11-28 18:48:01.543353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.077 [2024-11-28 18:48:01.543363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:32.077 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73198 00:07:32.336 [2024-11-28 18:48:01.702525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.336 18:48:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:32.336 00:07:32.336 real 0m1.918s 00:07:32.336 user 0m2.120s 00:07:32.336 sys 0m0.532s 00:07:32.336 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.336 18:48:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.336 ************************************ 00:07:32.336 END TEST raid0_resize_superblock_test 00:07:32.336 ************************************ 00:07:32.596 18:48:01 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:32.596 18:48:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:32.596 18:48:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.596 18:48:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.596 ************************************ 00:07:32.596 START TEST raid1_resize_superblock_test 00:07:32.596 ************************************ 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:32.596 18:48:01 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73268 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73268' 00:07:32.596 Process raid pid: 73268 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73268 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73268 ']' 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.596 18:48:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.596 [2024-11-28 18:48:02.080538] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:32.596 [2024-11-28 18:48:02.080656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.856 [2024-11-28 18:48:02.215726] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:32.856 [2024-11-28 18:48:02.252418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.856 [2024-11-28 18:48:02.278792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.856 [2024-11-28 18:48:02.320809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.856 [2024-11-28 18:48:02.320847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.426 18:48:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.426 18:48:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:33.426 18:48:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:33.426 18:48:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.426 18:48:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.426 malloc0 00:07:33.426 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.426 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:33.426 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.426 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.426 [2024-11-28 18:48:03.015600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:33.426 [2024-11-28 18:48:03.015660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.426 [2024-11-28 18:48:03.015693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:33.426 [2024-11-28 18:48:03.015706] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:07:33.426 [2024-11-28 18:48:03.017900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.426 [2024-11-28 18:48:03.017934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:33.426 pt0 00:07:33.426 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.426 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:33.426 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.426 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.697 60b202e0-fc61-49af-8a3b-ff4bcdbea835 00:07:33.697 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.697 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.698 09fe816e-56e6-44c2-8dae-5ccb5d505c49 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.698 2bd221c0-808b-44af-8786-013a99785e9e 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.698 [2024-11-28 18:48:03.153203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 09fe816e-56e6-44c2-8dae-5ccb5d505c49 is claimed 00:07:33.698 [2024-11-28 18:48:03.153288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2bd221c0-808b-44af-8786-013a99785e9e is claimed 00:07:33.698 [2024-11-28 18:48:03.153396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:33.698 [2024-11-28 18:48:03.153408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:33.698 [2024-11-28 18:48:03.153697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:33.698 [2024-11-28 18:48:03.153849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:33.698 [2024-11-28 18:48:03.153871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:33.698 [2024-11-28 18:48:03.153982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.698 18:48:03 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:33.698 [2024-11-28 18:48:03.241435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 
00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.698 [2024-11-28 18:48:03.289415] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:33.698 [2024-11-28 18:48:03.289471] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '09fe816e-56e6-44c2-8dae-5ccb5d505c49' was resized: old size 131072, new size 204800 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.698 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 [2024-11-28 18:48:03.301345] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:33.970 [2024-11-28 18:48:03.301376] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2bd221c0-808b-44af-8786-013a99785e9e' was resized: old size 131072, new size 204800 00:07:33.970 [2024-11-28 18:48:03.301394] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.970 18:48:03 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq 
'.[].num_blocks' 00:07:33.970 [2024-11-28 18:48:03.393449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 [2024-11-28 18:48:03.433286] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:33.970 [2024-11-28 18:48:03.433369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:33.970 [2024-11-28 18:48:03.433395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:33.970 [2024-11-28 18:48:03.433551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.970 [2024-11-28 18:48:03.433679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.970 [2024-11-28 18:48:03.433738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.970 [2024-11-28 18:48:03.433748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.970 18:48:03 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.970 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 [2024-11-28 18:48:03.445232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:33.970 [2024-11-28 18:48:03.445273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.970 [2024-11-28 18:48:03.445294] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:33.970 [2024-11-28 18:48:03.445303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.970 [2024-11-28 18:48:03.447320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.970 [2024-11-28 18:48:03.447351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:33.971 [2024-11-28 18:48:03.448735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 09fe816e-56e6-44c2-8dae-5ccb5d505c49 00:07:33.971 [2024-11-28 18:48:03.448783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 09fe816e-56e6-44c2-8dae-5ccb5d505c49 is claimed 00:07:33.971 [2024-11-28 18:48:03.448872] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2bd221c0-808b-44af-8786-013a99785e9e 00:07:33.971 [2024-11-28 18:48:03.448894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2bd221c0-808b-44af-8786-013a99785e9e is claimed 00:07:33.971 [2024-11-28 18:48:03.448978] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2bd221c0-808b-44af-8786-013a99785e9e (2) smaller than existing raid bdev Raid (3) 00:07:33.971 [2024-11-28 18:48:03.449001] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine 
bdev 09fe816e-56e6-44c2-8dae-5ccb5d505c49: File exists 00:07:33.971 [2024-11-28 18:48:03.449055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:33.971 [2024-11-28 18:48:03.449062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:33.971 [2024-11-28 18:48:03.449306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:07:33.971 [2024-11-28 18:48:03.449456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:33.971 [2024-11-28 18:48:03.449472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:33.971 [2024-11-28 18:48:03.449617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.971 pt0 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:33.971 18:48:03 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.971 [2024-11-28 18:48:03.473798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73268 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73268 ']' 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73268 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73268 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.971 killing process with pid 73268 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73268' 00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73268 00:07:33.971 [2024-11-28 18:48:03.549980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:33.971 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73268 00:07:33.971 [2024-11-28 18:48:03.550051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.971 [2024-11-28 18:48:03.550093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.971 [2024-11-28 18:48:03.550104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:34.231 [2024-11-28 18:48:03.708967] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.492 18:48:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:34.492 00:07:34.492 real 0m1.934s 00:07:34.492 user 0m2.171s 00:07:34.492 sys 0m0.490s 00:07:34.492 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.492 18:48:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.492 ************************************ 00:07:34.492 END TEST raid1_resize_superblock_test 00:07:34.492 ************************************ 00:07:34.492 18:48:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:34.492 18:48:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:34.492 18:48:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:34.492 18:48:03 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:34.492 18:48:03 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:34.492 18:48:04 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:34.492 18:48:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.492 18:48:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.492 18:48:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.492 ************************************ 
00:07:34.492 START TEST raid_function_test_raid0 00:07:34.492 ************************************ 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=73339 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73339' 00:07:34.492 Process raid pid: 73339 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 73339 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 73339 ']' 00:07:34.492 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.493 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.493 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:34.493 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.493 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:34.754 [2024-11-28 18:48:04.107873] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:34.754 [2024-11-28 18:48:04.108005] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.754 [2024-11-28 18:48:04.242200] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.754 [2024-11-28 18:48:04.265487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.754 [2024-11-28 18:48:04.289831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.754 [2024-11-28 18:48:04.331714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.754 [2024-11-28 18:48:04.331749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.323 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.323 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:35.323 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:35.323 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.323 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:35.583 Base_1 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:35.583 Base_2 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:35.583 [2024-11-28 18:48:04.956706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:35.583 [2024-11-28 18:48:04.958516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:35.583 [2024-11-28 18:48:04.958577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:35.583 [2024-11-28 18:48:04.958595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:35.583 [2024-11-28 18:48:04.958838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:35.583 [2024-11-28 18:48:04.958986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:35.583 [2024-11-28 18:48:04.958999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:07:35.583 [2024-11-28 18:48:04.959132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd 
bdev_raid_get_bdevs online 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:35.583 18:48:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:35.583 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:35.842 [2024-11-28 18:48:05.192798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:07:35.842 /dev/nbd0 00:07:35.842 18:48:05 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.842 1+0 records in 00:07:35.842 1+0 records out 00:07:35.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441822 s, 9.3 MB/s 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:35.842 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.843 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:35.843 18:48:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 
-- # return 0 00:07:35.843 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.843 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:35.843 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:35.843 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:35.843 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:36.101 { 00:07:36.101 "nbd_device": "/dev/nbd0", 00:07:36.101 "bdev_name": "raid" 00:07:36.101 } 00:07:36.101 ]' 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:36.101 { 00:07:36.101 "nbd_device": "/dev/nbd0", 00:07:36.101 "bdev_name": "raid" 00:07:36.101 } 00:07:36.101 ]' 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 
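As an aside, the `count=1` derivation in the trace above (nbd_get_disks JSON piped through `jq` and `grep -c`) can be reproduced standalone. The JSON literal below is copied from the log output; `jq` must be installed for this sketch to run:

```shell
# Reproduce the nbd_get_count pipeline from the trace: take the
# nbd_get_disks JSON, extract each .nbd_device with jq, and count
# how many /dev/nbd entries are present.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "raid" } ]'
count=$(printf '%s\n' "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
echo "count=$count"
```

The test then asserts `count` equals the number of bdevs it exported, which is why `'[' 1 -ne 1 ']'` evaluates false and the run proceeds.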
00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:36.101 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:36.102 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:36.102 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:36.102 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:36.102 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:36.102 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:36.102 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:36.102 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:36.102 4096+0 records in 00:07:36.102 4096+0 records out 00:07:36.102 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0330882 s, 63.4 MB/s 00:07:36.102 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:36.362 4096+0 records in 00:07:36.362 4096+0 records out 00:07:36.362 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.188487 s, 11.1 MB/s 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:36.362 128+0 records in 00:07:36.362 128+0 records out 00:07:36.362 65536 bytes (66 kB, 64 KiB) copied, 0.00119943 s, 54.6 MB/s 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:36.362 2035+0 records in 00:07:36.362 2035+0 records out 00:07:36.362 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0147703 s, 70.5 MB/s 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:36.362 456+0 records in 00:07:36.362 456+0 records out 00:07:36.362 233472 bytes (233 kB, 228 KiB) copied, 0.00398338 s, 58.6 MB/s 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # 
return 0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.362 18:48:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:36.622 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:36.622 [2024-11-28 18:48:06.112928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:36.623 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 73339 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 73339 ']' 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 73339 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73339 
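Stepping back, the `raid_unmap_data_verify` phase traced earlier follows a fixed pattern: fill a reference file with 4096 random 512-byte blocks, copy the data onto the raid device, then for each (offset, length) pair zero the range in the reference file and discard the same range on the device, comparing the two after every step. A minimal sketch of that pattern, with ordinary temp files standing in for `/dev/nbd0` (so a zeroing `dd` replaces `blkdiscard`; all paths here are illustrative, not from the test suite):

```shell
# Sketch of the raid_unmap_data_verify loop using regular files in
# place of the nbd device, so it runs without a raid bdev present.
set -e
blksize=512
ref=$(mktemp)   # reference copy (/raidtest/raidrandtest in the real test)
dev=$(mktemp)   # stand-in for /dev/nbd0
dd if=/dev/urandom of="$ref" bs=$blksize count=4096 2>/dev/null
cp "$ref" "$dev"                       # real test: dd ... of=/dev/nbd0 oflag=direct
for pair in "0 128" "1028 2035" "321 456"; do
  set -- $pair; off=$1; num=$2
  # zero the range in the reference copy...
  dd if=/dev/zero of="$ref" bs=$blksize seek=$off count=$num conv=notrunc 2>/dev/null
  # ...and in the stand-in device (the real test uses blkdiscard + flushbufs here)
  dd if=/dev/zero of="$dev" bs=$blksize seek=$off count=$num conv=notrunc 2>/dev/null
  cmp -b -n 2097152 "$ref" "$dev"      # any mismatch aborts via set -e
done
echo verify-ok
```

The (offset, count) pairs and the 2097152-byte compare length match the `unmap_blk_offs`/`unmap_blk_nums` arrays and `rw_len` set in the trace.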
00:07:36.883 killing process with pid 73339 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73339' 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 73339 00:07:36.883 [2024-11-28 18:48:06.432398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:36.883 [2024-11-28 18:48:06.432506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.883 [2024-11-28 18:48:06.432560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:36.883 [2024-11-28 18:48:06.432569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:07:36.883 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 73339 00:07:36.883 [2024-11-28 18:48:06.454979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.144 ************************************ 00:07:37.144 END TEST raid_function_test_raid0 00:07:37.144 ************************************ 00:07:37.144 18:48:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:37.144 00:07:37.144 real 0m2.652s 00:07:37.144 user 0m3.308s 00:07:37.144 sys 0m0.906s 00:07:37.144 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.144 18:48:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:37.144 18:48:06 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:37.144 18:48:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:07:37.144 18:48:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.144 18:48:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.144 ************************************ 00:07:37.144 START TEST raid_function_test_concat 00:07:37.144 ************************************ 00:07:37.144 18:48:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:37.144 18:48:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:37.144 18:48:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:37.144 18:48:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:37.144 Process raid pid: 73456 00:07:37.144 18:48:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=73456 00:07:37.144 18:48:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:37.144 18:48:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73456' 00:07:37.404 18:48:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 73456 00:07:37.404 18:48:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 73456 ']' 00:07:37.404 18:48:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.404 18:48:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.404 18:48:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
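The `waitforlisten` step logged here blocks until the freshly launched `bdev_svc` process brings up its RPC socket at `/var/tmp/spdk.sock`. Schematically it is a bounded retry loop; the version below is a simplified stand-in that only polls for the socket path to appear (the real helper in `autotest_common.sh` also confirms the app answers RPCs, which is an assumption this sketch drops):

```shell
# Simplified waitforlisten: poll until the RPC socket path exists,
# giving up after max_retries attempts.
waitforlisten() {
  local rpc_addr=$1 max_retries=${2:-100} i=0
  while [ "$i" -lt "$max_retries" ]; do
    [ -e "$rpc_addr" ] && return 0
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}
sock=$(mktemp)                  # stands in for /var/tmp/spdk.sock
waitforlisten "$sock" && echo "listening on $sock"
```

Once this returns 0, the script proceeds to issue `rpc_cmd` calls against the socket, as the following records show.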
00:07:37.404 18:48:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.404 18:48:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:37.404 [2024-11-28 18:48:06.827957] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:37.404 [2024-11-28 18:48:06.828068] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.404 [2024-11-28 18:48:06.963987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:37.404 [2024-11-28 18:48:07.001505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.664 [2024-11-28 18:48:07.026760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.664 [2024-11-28 18:48:07.068867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.664 [2024-11-28 18:48:07.068978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.234 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.234 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:38.235 Base_1 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.235 18:48:07 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:38.235 Base_2 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:38.235 [2024-11-28 18:48:07.681849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:38.235 [2024-11-28 18:48:07.683776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:38.235 [2024-11-28 18:48:07.683847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:38.235 [2024-11-28 18:48:07.683863] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:38.235 [2024-11-28 18:48:07.684098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:38.235 [2024-11-28 18:48:07.684209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:38.235 [2024-11-28 18:48:07.684221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:07:38.235 [2024-11-28 18:48:07.684350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.235 18:48:07 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:38.235 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:38.495 [2024-11-28 18:48:07.897958] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:07:38.495 /dev/nbd0 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.495 1+0 records in 00:07:38.495 1+0 records out 00:07:38.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004791 s, 8.5 MB/s 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.495 18:48:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:38.756 { 00:07:38.756 "nbd_device": "/dev/nbd0", 00:07:38.756 "bdev_name": "raid" 00:07:38.756 } 00:07:38.756 ]' 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:38.756 { 00:07:38.756 "nbd_device": "/dev/nbd0", 00:07:38.756 "bdev_name": "raid" 00:07:38.756 } 00:07:38.756 ]' 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
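The `waitfornbd` calls in this trace reduce to a word-match loop over `/proc/partitions`, retried up to 20 times. A sketch with the partitions file parameterised so it can be exercised against a fake listing (the retry budget of 20 matches the helper; the fake file contents below are illustrative):

```shell
# waitfornbd, schematically: retry a whole-word grep for the device
# name in a /proc/partitions-style listing, up to 20 times.
waitfornbd() {
  local nbd_name=$1 partitions=${2:-/proc/partitions} i=1
  while [ "$i" -le 20 ]; do
    grep -q -w "$nbd_name" "$partitions" && return 0
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}
fake=$(mktemp)
printf ' 43        0       1024 nbd0\n' > "$fake"
waitfornbd nbd0 "$fake" && echo nbd0-present
```

After the name shows up, the real helper additionally reads one 4096-byte block from the device with `dd iflag=direct` (as traced above) to confirm it is actually usable before returning.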
00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:38.756 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:38.757 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:38.757 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:38.757 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:38.757 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:38.757 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:38.757 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:38.757 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:38.757 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:38.757 4096+0 records in
00:07:38.757 4096+0 records out
00:07:38.757 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0329558 s, 63.6 MB/s
00:07:38.757 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:39.017 4096+0 records in
00:07:39.017 4096+0 records out
00:07:39.017 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.177476 s, 11.8 MB/s
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:39.017 128+0 records in
00:07:39.017 128+0 records out
00:07:39.017 65536 bytes (66 kB, 64 KiB) copied, 0.00123182 s, 53.2 MB/s
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:39.017 2035+0 records in
00:07:39.017 2035+0 records out
00:07:39.017 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0139703 s, 74.6 MB/s
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:39.017 456+0 records in
00:07:39.017 456+0 records out
00:07:39.017 233472 bytes (233 kB, 228 KiB) copied, 0.00216763 s, 108 MB/s
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:39.017 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:39.276 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:39.276 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
[2024-11-28 18:48:08.797751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:39.276 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:39.276 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:39.276 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:39.276 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:39.276 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:07:39.276 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:07:39.277 18:48:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:39.277 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:39.277 18:48:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 73456
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 73456 ']'
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 73456
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73456
killing process with pid 73456
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73456'
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 73456
[2024-11-28 18:48:09.103319] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-28 18:48:09.103436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:39.537 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 73456
[2024-11-28 18:48:09.103504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-28 18:48:09.103515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline
[2024-11-28 18:48:09.125859] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:39.797 18:48:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:07:39.797
00:07:39.797 real 0m2.593s
00:07:39.797 user 0m3.207s
00:07:39.797 sys 0m0.889s
00:07:39.797 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:39.797 18:48:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:39.797 ************************************
00:07:39.797 END TEST raid_function_test_concat
00:07:39.797 ************************************
00:07:39.797 18:48:09 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:07:39.797 18:48:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:39.797 18:48:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:39.797 18:48:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:40.057 ************************************
00:07:40.057 START TEST raid0_resize_test
00:07:40.057 ************************************
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73568
00:07:40.057 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73568'
00:07:40.058 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Process raid pid: 73568
00:07:40.058 18:48:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73568
00:07:40.058 18:48:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 73568 ']'
00:07:40.058 18:48:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:40.058 18:48:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:40.058 18:48:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:40.058 18:48:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:40.058 18:48:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-28 18:48:09.493037] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
[2024-11-28 18:48:09.493161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-28 18:48:09.628002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
[2024-11-28 18:48:09.665105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-28 18:48:09.690217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-28 18:48:09.731802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-28 18:48:09.731833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.888 Base_1
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.888 Base_2
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-28 18:48:10.342966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
[2024-11-28 18:48:10.344766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
[2024-11-28 18:48:10.344818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
[2024-11-28 18:48:10.344827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
[2024-11-28 18:48:10.345064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
[2024-11-28 18:48:10.345152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
[2024-11-28 18:48:10.345162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400
[2024-11-28 18:48:10.345257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-28 18:48:10.354938] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-28 18:48:10.354959] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:07:40.888 true
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-28 18:48:10.371128] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-28 18:48:10.414954] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-28 18:48:10.415019] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
[2024-11-28 18:48:10.415071] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:07:40.888 true
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-28 18:48:10.431132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73568
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 73568 ']'
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 73568
00:07:40.888 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname
00:07:40.889 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:41.149 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73568
killing process with pid 73568
00:07:41.149 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:41.149 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:41.149 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73568'
00:07:41.149 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 73568
[2024-11-28 18:48:10.512970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-28 18:48:10.513055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-28 18:48:10.513102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-28 18:48:10.513115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline
00:07:41.149 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 73568
[2024-11-28 18:48:10.514557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:41.149 18:48:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:41.149
00:07:41.149 real 0m1.320s
00:07:41.149 user 0m1.470s
00:07:41.149 sys 0m0.311s
00:07:41.149 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.149 18:48:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:41.149 ************************************
00:07:41.149 END TEST raid0_resize_test
00:07:41.149 ************************************
00:07:41.410 18:48:10 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:07:41.410 18:48:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:41.410 18:48:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.410 18:48:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:41.410 ************************************
00:07:41.410 START TEST raid1_resize_test
00:07:41.410 ************************************
00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1
00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73613 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73613' 00:07:41.410 Process raid pid: 73613 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73613 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 73613 ']' 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.410 18:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.410 [2024-11-28 18:48:10.888467] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:41.410 [2024-11-28 18:48:10.888657] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.670 [2024-11-28 18:48:11.024371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.670 [2024-11-28 18:48:11.060255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.670 [2024-11-28 18:48:11.085183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.670 [2024-11-28 18:48:11.127081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.670 [2024-11-28 18:48:11.127146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 Base_1 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 
00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 Base_2 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 [2024-11-28 18:48:11.738372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:42.240 [2024-11-28 18:48:11.740183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:42.240 [2024-11-28 18:48:11.740240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:42.240 [2024-11-28 18:48:11.740248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:42.240 [2024-11-28 18:48:11.740531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:42.240 [2024-11-28 18:48:11.740632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:42.240 [2024-11-28 18:48:11.740643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:42.240 [2024-11-28 18:48:11.740757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:42.240 
18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 [2024-11-28 18:48:11.750342] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:42.240 [2024-11-28 18:48:11.750401] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:42.240 true 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:42.240 [2024-11-28 18:48:11.762535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:07:42.240 [2024-11-28 18:48:11.810364] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:42.240 [2024-11-28 18:48:11.810388] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:42.240 [2024-11-28 18:48:11.810415] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:42.240 true 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.240 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:42.241 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:42.241 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.241 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.241 [2024-11-28 18:48:11.822544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.241 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73613 00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 73613 ']' 00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 73613 
00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname
00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73613
00:07:42.501 killing process with pid 73613
18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73613'
00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 73613
00:07:42.501 [2024-11-28 18:48:11.909061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:42.501 [2024-11-28 18:48:11.909135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:42.501 18:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 73613
00:07:42.501 [2024-11-28 18:48:11.909558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:42.501 [2024-11-28 18:48:11.909579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline
00:07:42.501 [2024-11-28 18:48:11.910693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:42.762 18:48:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:42.762
00:07:42.762 real 0m1.323s
00:07:42.762 user 0m1.485s
00:07:42.762 sys 0m0.297s
00:07:42.762 18:48:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.762 ************************************
00:07:42.762 END TEST raid1_resize_test
************************************
00:07:42.762 18:48:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:42.763 18:48:12 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:07:42.763 18:48:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:42.763 18:48:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:07:42.763 18:48:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:42.763 18:48:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:42.763 18:48:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:42.763 ************************************
00:07:42.763 START TEST raid_state_function_test
************************************
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:42.763 Process raid pid: 73665
18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73665
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73665'
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73665
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73665 ']'
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:42.763 18:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.022 [2024-11-28 18:48:12.292505] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:07:43.022 [2024-11-28 18:48:12.292624] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:43.022 [2024-11-28 18:48:12.427853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:43.023 [2024-11-28 18:48:12.465314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.023 [2024-11-28 18:48:12.490640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.023 [2024-11-28 18:48:12.532617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:43.023 [2024-11-28 18:48:12.532647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.592 [2024-11-28 18:48:13.116184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:43.592 [2024-11-28 18:48:13.116241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:43.592 [2024-11-28 18:48:13.116271] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:43.592 [2024-11-28 18:48:13.116280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:43.592 "name": "Existed_Raid",
00:07:43.592 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:43.592 "strip_size_kb": 64,
00:07:43.592 "state": "configuring",
00:07:43.592 "raid_level": "raid0",
00:07:43.592 "superblock": false,
00:07:43.592 "num_base_bdevs": 2,
00:07:43.592 "num_base_bdevs_discovered": 0,
00:07:43.592 "num_base_bdevs_operational": 2,
00:07:43.592 "base_bdevs_list": [
00:07:43.592 {
00:07:43.592 "name": "BaseBdev1",
00:07:43.592 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:43.592 "is_configured": false,
00:07:43.592 "data_offset": 0,
00:07:43.592 "data_size": 0
00:07:43.592 },
00:07:43.592 {
00:07:43.592 "name": "BaseBdev2",
00:07:43.592 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:43.592 "is_configured": false,
00:07:43.592 "data_offset": 0,
00:07:43.592 "data_size": 0
00:07:43.592 }
00:07:43.592 ]
00:07:43.592 }'
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:43.592 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.177 [2024-11-28 18:48:13.560216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:44.177 [2024-11-28 18:48:13.560333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.177 [2024-11-28 18:48:13.568247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:44.177 [2024-11-28 18:48:13.568323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:44.177 [2024-11-28 18:48:13.568354] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:44.177 [2024-11-28 18:48:13.568377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.177 [2024-11-28 18:48:13.585110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:44.177 BaseBdev1
18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.177 [
00:07:44.177 {
00:07:44.177 "name": "BaseBdev1",
00:07:44.177 "aliases": [
00:07:44.177 "6ee9027f-e590-4b97-a06a-cdda8114aff5"
00:07:44.177 ],
00:07:44.177 "product_name": "Malloc disk",
00:07:44.177 "block_size": 512,
00:07:44.177 "num_blocks": 65536,
00:07:44.177 "uuid": "6ee9027f-e590-4b97-a06a-cdda8114aff5",
00:07:44.177 "assigned_rate_limits": {
00:07:44.177 "rw_ios_per_sec": 0,
00:07:44.177 "rw_mbytes_per_sec": 0,
00:07:44.177 "r_mbytes_per_sec": 0,
00:07:44.177 "w_mbytes_per_sec": 0
00:07:44.177 },
00:07:44.177 "claimed": true,
00:07:44.177 "claim_type": "exclusive_write",
00:07:44.177 "zoned": false,
00:07:44.177 "supported_io_types": {
00:07:44.177 "read": true,
00:07:44.177 "write": true,
00:07:44.177 "unmap": true,
00:07:44.177 "flush": true,
00:07:44.177 "reset": true,
00:07:44.177 "nvme_admin": false,
00:07:44.177 "nvme_io": false,
00:07:44.177 "nvme_io_md": false,
00:07:44.177 "write_zeroes": true,
00:07:44.177 "zcopy": true,
00:07:44.177 "get_zone_info": false,
00:07:44.177 "zone_management": false,
00:07:44.177 "zone_append": false,
00:07:44.177 "compare": false,
00:07:44.177 "compare_and_write": false,
00:07:44.177 "abort": true,
00:07:44.177 "seek_hole": false,
00:07:44.177 "seek_data": false,
00:07:44.177 "copy": true,
00:07:44.177 "nvme_iov_md": false
00:07:44.177 },
00:07:44.177 "memory_domains": [
00:07:44.177 {
00:07:44.177 "dma_device_id": "system",
00:07:44.177 "dma_device_type": 1
00:07:44.177 },
00:07:44.177 {
00:07:44.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:44.177 "dma_device_type": 2
00:07:44.177 }
00:07:44.177 ],
00:07:44.177 "driver_specific": {}
00:07:44.177 }
00:07:44.177 ]
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:44.177 "name": "Existed_Raid",
00:07:44.177 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:44.177 "strip_size_kb": 64,
00:07:44.177 "state": "configuring",
00:07:44.177 "raid_level": "raid0",
00:07:44.177 "superblock": false,
00:07:44.177 "num_base_bdevs": 2,
00:07:44.177 "num_base_bdevs_discovered": 1,
00:07:44.177 "num_base_bdevs_operational": 2,
00:07:44.177 "base_bdevs_list": [
00:07:44.177 {
00:07:44.177 "name": "BaseBdev1",
00:07:44.177 "uuid": "6ee9027f-e590-4b97-a06a-cdda8114aff5",
00:07:44.177 "is_configured": true,
00:07:44.177 "data_offset": 0,
00:07:44.177 "data_size": 65536
00:07:44.177 },
00:07:44.177 {
00:07:44.177 "name": "BaseBdev2",
00:07:44.177 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:44.177 "is_configured": false,
00:07:44.177 "data_offset": 0,
00:07:44.177 "data_size": 0
00:07:44.177 }
00:07:44.177 ]
00:07:44.177 }'
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:44.177 18:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.437 [2024-11-28 18:48:14.021241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:44.437 [2024-11-28 18:48:14.021338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.437 [2024-11-28 18:48:14.029302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:44.437 [2024-11-28 18:48:14.031176] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:44.437 [2024-11-28 18:48:14.031248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:44.437 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:44.697 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:44.697 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:44.697 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.697 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.697 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.697 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:44.697 "name": "Existed_Raid",
00:07:44.697 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:44.697 "strip_size_kb": 64,
00:07:44.697 "state": "configuring",
00:07:44.697 "raid_level": "raid0",
00:07:44.697 "superblock": false,
00:07:44.697 "num_base_bdevs": 2,
00:07:44.697 "num_base_bdevs_discovered": 1,
00:07:44.697 "num_base_bdevs_operational": 2,
00:07:44.697 "base_bdevs_list": [
00:07:44.697 {
00:07:44.697 "name": "BaseBdev1",
00:07:44.697 "uuid": "6ee9027f-e590-4b97-a06a-cdda8114aff5",
00:07:44.697 "is_configured": true,
00:07:44.697 "data_offset": 0,
00:07:44.697 "data_size": 65536
00:07:44.697 },
00:07:44.697 {
00:07:44.697 "name": "BaseBdev2",
00:07:44.697 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:44.697 "is_configured": false,
00:07:44.697 "data_offset": 0,
00:07:44.697 "data_size": 0
00:07:44.697 }
00:07:44.697 ]
00:07:44.697 }'
00:07:44.697 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:44.697 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.958 [2024-11-28 18:48:14.480287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:44.958 [2024-11-28 18:48:14.480324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:07:44.958 [2024-11-28 18:48:14.480335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:44.958 [2024-11-28 18:48:14.480603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:07:44.958 [2024-11-28 18:48:14.480750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:07:44.958 [2024-11-28 18:48:14.480760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00
00:07:44.958 [2024-11-28 18:48:14.480986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:44.958 BaseBdev2
18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.958 [
00:07:44.958 {
00:07:44.958 "name": "BaseBdev2",
00:07:44.958 "aliases": [
00:07:44.958 "bcdb99c1-6026-4079-8bf9-87b4901babcb"
00:07:44.958 ],
00:07:44.958 "product_name": "Malloc disk",
00:07:44.958 "block_size": 512,
00:07:44.958 "num_blocks": 65536,
00:07:44.958 "uuid": "bcdb99c1-6026-4079-8bf9-87b4901babcb",
00:07:44.958 "assigned_rate_limits": {
00:07:44.958 "rw_ios_per_sec": 0,
00:07:44.958 "rw_mbytes_per_sec": 0,
00:07:44.958 "r_mbytes_per_sec": 0,
00:07:44.958 "w_mbytes_per_sec": 0
00:07:44.958 },
00:07:44.958 "claimed": true,
00:07:44.958 "claim_type": "exclusive_write",
00:07:44.958 "zoned": false,
00:07:44.958 "supported_io_types": {
00:07:44.958 "read": true,
00:07:44.958 "write": true,
00:07:44.958 "unmap": true,
00:07:44.958 "flush": true,
00:07:44.958 "reset": true,
00:07:44.958 "nvme_admin": false,
00:07:44.958 "nvme_io": false,
00:07:44.958 "nvme_io_md": false,
00:07:44.958 "write_zeroes": true,
00:07:44.958 "zcopy": true,
00:07:44.958 "get_zone_info": false,
00:07:44.958 "zone_management": false,
00:07:44.958 "zone_append": false,
00:07:44.958 "compare": false,
00:07:44.958 "compare_and_write": false,
00:07:44.958 "abort": true,
00:07:44.958 "seek_hole": false,
00:07:44.958 "seek_data": false,
00:07:44.958 "copy": true,
00:07:44.958 "nvme_iov_md": false
00:07:44.958 },
00:07:44.958 "memory_domains": [
00:07:44.958 {
00:07:44.958 "dma_device_id": "system",
00:07:44.958 "dma_device_type": 1
00:07:44.958 },
00:07:44.958 {
00:07:44.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:44.958 "dma_device_type": 2
00:07:44.958 }
00:07:44.958 ],
00:07:44.958 "driver_specific": {}
00:07:44.958 }
00:07:44.958 ]
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:44.958 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:44.959 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:44.959 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:44.959 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.959 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.959 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.218 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:45.218 "name": "Existed_Raid",
00:07:45.218 "uuid": "a1ecd4d5-646a-4d85-b54b-c2f706587609",
00:07:45.218 "strip_size_kb": 64,
00:07:45.218 "state": "online",
00:07:45.218 "raid_level": "raid0",
00:07:45.218 "superblock": false,
00:07:45.218 "num_base_bdevs": 2,
00:07:45.218 "num_base_bdevs_discovered": 2,
00:07:45.218 "num_base_bdevs_operational": 2,
00:07:45.218 "base_bdevs_list": [
00:07:45.218 {
00:07:45.218 "name": "BaseBdev1",
00:07:45.218 "uuid": "6ee9027f-e590-4b97-a06a-cdda8114aff5",
00:07:45.218 "is_configured": true,
00:07:45.218 "data_offset": 0,
00:07:45.218 "data_size": 65536
00:07:45.218 },
00:07:45.218 {
00:07:45.218 "name": "BaseBdev2",
00:07:45.218 "uuid": "bcdb99c1-6026-4079-8bf9-87b4901babcb",
00:07:45.218 "is_configured": true,
00:07:45.218 "data_offset": 0,
00:07:45.218 "data_size": 65536
00:07:45.218 }
00:07:45.218 ]
00:07:45.218 }'
00:07:45.218 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:45.218 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.478 [2024-11-28 18:48:14.940735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.478 18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:45.478 "name": "Existed_Raid",
00:07:45.478 "aliases": [
00:07:45.478 "a1ecd4d5-646a-4d85-b54b-c2f706587609"
00:07:45.478 ],
00:07:45.478 "product_name": "Raid Volume",
00:07:45.478 "block_size": 512,
00:07:45.478 "num_blocks": 131072,
00:07:45.478 "uuid": "a1ecd4d5-646a-4d85-b54b-c2f706587609",
00:07:45.478 "assigned_rate_limits": {
00:07:45.478 "rw_ios_per_sec": 0,
00:07:45.478 "rw_mbytes_per_sec": 0,
00:07:45.478 "r_mbytes_per_sec": 0,
00:07:45.478 "w_mbytes_per_sec": 0
00:07:45.478 },
00:07:45.478 "claimed": false,
00:07:45.478 "zoned": false,
00:07:45.478 "supported_io_types": {
00:07:45.478 "read": true,
00:07:45.478 "write": true,
00:07:45.478 "unmap": true,
00:07:45.478 "flush": true,
00:07:45.478 "reset": true,
00:07:45.478 "nvme_admin": false,
00:07:45.478 "nvme_io": false,
00:07:45.478 "nvme_io_md": false,
00:07:45.478 "write_zeroes": true,
00:07:45.478 "zcopy": false,
00:07:45.478 "get_zone_info": false,
00:07:45.478 "zone_management": false,
00:07:45.478 "zone_append": false,
00:07:45.478 "compare": false,
00:07:45.478 "compare_and_write": false,
00:07:45.478 "abort": false,
00:07:45.478 "seek_hole": false,
00:07:45.478 "seek_data": false,
00:07:45.478 "copy": false,
00:07:45.478 "nvme_iov_md": false
00:07:45.478 },
00:07:45.478 "memory_domains": [
00:07:45.478 {
00:07:45.478 "dma_device_id": "system",
00:07:45.478 "dma_device_type": 1
00:07:45.478 },
00:07:45.478 {
00:07:45.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:45.478 "dma_device_type": 2
00:07:45.478 },
00:07:45.478 {
00:07:45.478 "dma_device_id": "system",
00:07:45.478 "dma_device_type": 1
00:07:45.478 },
00:07:45.479 {
00:07:45.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:45.479 "dma_device_type": 2
00:07:45.479 }
00:07:45.479 ],
00:07:45.479 "driver_specific": {
00:07:45.479 "raid": {
00:07:45.479 "uuid": "a1ecd4d5-646a-4d85-b54b-c2f706587609",
00:07:45.479 "strip_size_kb": 64,
00:07:45.479 "state": "online",
00:07:45.479 "raid_level": "raid0",
00:07:45.479 "superblock": false,
00:07:45.479 "num_base_bdevs": 2,
00:07:45.479 "num_base_bdevs_discovered": 2,
00:07:45.479 "num_base_bdevs_operational": 2,
00:07:45.479 "base_bdevs_list": [
00:07:45.479 {
00:07:45.479 "name": "BaseBdev1",
00:07:45.479 "uuid": "6ee9027f-e590-4b97-a06a-cdda8114aff5",
00:07:45.479 "is_configured": true,
00:07:45.479 "data_offset": 0,
00:07:45.479 "data_size": 65536
00:07:45.479 },
00:07:45.479 {
00:07:45.479 "name": "BaseBdev2",
00:07:45.479 "uuid": "bcdb99c1-6026-4079-8bf9-87b4901babcb",
00:07:45.479 "is_configured": true,
00:07:45.479 "data_offset": 0,
00:07:45.479 "data_size": 65536
00:07:45.479 }
00:07:45.479 ]
00:07:45.479 }
00:07:45.479 }
00:07:45.479 }'
18:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.479 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:45.479 BaseBdev2' 00:07:45.479 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.479 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.479 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.737 [2024-11-28 18:48:15.176574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:45.737 [2024-11-28 18:48:15.176600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.737 [2024-11-28 18:48:15.176652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.737 "name": "Existed_Raid", 00:07:45.737 "uuid": "a1ecd4d5-646a-4d85-b54b-c2f706587609", 00:07:45.737 "strip_size_kb": 64, 00:07:45.737 "state": "offline", 00:07:45.737 "raid_level": "raid0", 00:07:45.737 "superblock": false, 00:07:45.737 "num_base_bdevs": 2, 00:07:45.737 "num_base_bdevs_discovered": 1, 00:07:45.737 "num_base_bdevs_operational": 1, 00:07:45.737 "base_bdevs_list": [ 
00:07:45.737 { 00:07:45.737 "name": null, 00:07:45.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.737 "is_configured": false, 00:07:45.737 "data_offset": 0, 00:07:45.737 "data_size": 65536 00:07:45.737 }, 00:07:45.737 { 00:07:45.737 "name": "BaseBdev2", 00:07:45.737 "uuid": "bcdb99c1-6026-4079-8bf9-87b4901babcb", 00:07:45.737 "is_configured": true, 00:07:45.737 "data_offset": 0, 00:07:45.737 "data_size": 65536 00:07:45.737 } 00:07:45.737 ] 00:07:45.737 }' 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.737 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:46.306 [2024-11-28 18:48:15.691848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:46.306 [2024-11-28 18:48:15.691956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73665 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73665 ']' 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73665 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73665 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73665' 00:07:46.306 killing process with pid 73665 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73665 00:07:46.306 [2024-11-28 18:48:15.804455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.306 18:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73665 00:07:46.306 [2024-11-28 18:48:15.805470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:46.566 00:07:46.566 real 0m3.813s 00:07:46.566 user 0m6.057s 00:07:46.566 sys 0m0.754s 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.566 ************************************ 00:07:46.566 END TEST raid_state_function_test 00:07:46.566 ************************************ 00:07:46.566 18:48:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:46.566 18:48:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:46.566 18:48:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.566 18:48:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.566 ************************************ 00:07:46.566 START TEST raid_state_function_test_sb 
00:07:46.566 ************************************ 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:46.566 18:48:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73907 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73907' 00:07:46.566 Process raid pid: 73907 00:07:46.566 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73907 00:07:46.567 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73907 ']' 00:07:46.567 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.567 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.567 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:46.567 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.567 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.827 [2024-11-28 18:48:16.176539] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:46.827 [2024-11-28 18:48:16.176766] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.827 [2024-11-28 18:48:16.312976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:46.827 [2024-11-28 18:48:16.350541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.827 [2024-11-28 18:48:16.375432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.827 [2024-11-28 18:48:16.417081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.827 [2024-11-28 18:48:16.417202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.398 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.398 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:47.398 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.398 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.398 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.658 [2024-11-28 18:48:17.004923] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:07:47.658 [2024-11-28 18:48:17.005047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:47.658 [2024-11-28 18:48:17.005081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.658 [2024-11-28 18:48:17.005103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.658 "name": "Existed_Raid", 00:07:47.658 "uuid": "70959525-4fe7-41a5-a6f8-c35c16dde879", 00:07:47.658 "strip_size_kb": 64, 00:07:47.658 "state": "configuring", 00:07:47.658 "raid_level": "raid0", 00:07:47.658 "superblock": true, 00:07:47.658 "num_base_bdevs": 2, 00:07:47.658 "num_base_bdevs_discovered": 0, 00:07:47.658 "num_base_bdevs_operational": 2, 00:07:47.658 "base_bdevs_list": [ 00:07:47.658 { 00:07:47.658 "name": "BaseBdev1", 00:07:47.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.658 "is_configured": false, 00:07:47.658 "data_offset": 0, 00:07:47.658 "data_size": 0 00:07:47.658 }, 00:07:47.658 { 00:07:47.658 "name": "BaseBdev2", 00:07:47.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.658 "is_configured": false, 00:07:47.658 "data_offset": 0, 00:07:47.658 "data_size": 0 00:07:47.658 } 00:07:47.658 ] 00:07:47.658 }' 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.658 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.918 [2024-11-28 18:48:17.460927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.918 [2024-11-28 18:48:17.461022] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.918 [2024-11-28 18:48:17.472967] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:47.918 [2024-11-28 18:48:17.473055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:47.918 [2024-11-28 18:48:17.473085] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.918 [2024-11-28 18:48:17.473107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.918 [2024-11-28 18:48:17.493717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.918 BaseBdev1 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:47.918 18:48:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.918 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.918 [ 00:07:47.918 { 00:07:47.918 "name": "BaseBdev1", 00:07:47.918 "aliases": [ 00:07:47.918 "95698ee6-d132-4bcc-90c8-ae8622ef2bd8" 00:07:47.918 ], 00:07:47.918 "product_name": "Malloc disk", 00:07:47.918 "block_size": 512, 00:07:47.918 "num_blocks": 65536, 00:07:48.178 "uuid": "95698ee6-d132-4bcc-90c8-ae8622ef2bd8", 00:07:48.178 "assigned_rate_limits": { 00:07:48.178 "rw_ios_per_sec": 0, 00:07:48.178 "rw_mbytes_per_sec": 0, 00:07:48.178 "r_mbytes_per_sec": 0, 00:07:48.178 "w_mbytes_per_sec": 0 00:07:48.178 }, 00:07:48.178 "claimed": true, 00:07:48.178 "claim_type": "exclusive_write", 00:07:48.178 "zoned": false, 
00:07:48.178 "supported_io_types": { 00:07:48.178 "read": true, 00:07:48.178 "write": true, 00:07:48.178 "unmap": true, 00:07:48.178 "flush": true, 00:07:48.178 "reset": true, 00:07:48.178 "nvme_admin": false, 00:07:48.178 "nvme_io": false, 00:07:48.178 "nvme_io_md": false, 00:07:48.178 "write_zeroes": true, 00:07:48.178 "zcopy": true, 00:07:48.178 "get_zone_info": false, 00:07:48.178 "zone_management": false, 00:07:48.178 "zone_append": false, 00:07:48.178 "compare": false, 00:07:48.178 "compare_and_write": false, 00:07:48.178 "abort": true, 00:07:48.178 "seek_hole": false, 00:07:48.178 "seek_data": false, 00:07:48.178 "copy": true, 00:07:48.178 "nvme_iov_md": false 00:07:48.178 }, 00:07:48.178 "memory_domains": [ 00:07:48.178 { 00:07:48.178 "dma_device_id": "system", 00:07:48.178 "dma_device_type": 1 00:07:48.178 }, 00:07:48.178 { 00:07:48.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.178 "dma_device_type": 2 00:07:48.178 } 00:07:48.178 ], 00:07:48.178 "driver_specific": {} 00:07:48.178 } 00:07:48.178 ] 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.178 "name": "Existed_Raid", 00:07:48.178 "uuid": "b235d6ce-0027-452e-bf78-d8bf60a75aa0", 00:07:48.178 "strip_size_kb": 64, 00:07:48.178 "state": "configuring", 00:07:48.178 "raid_level": "raid0", 00:07:48.178 "superblock": true, 00:07:48.178 "num_base_bdevs": 2, 00:07:48.178 "num_base_bdevs_discovered": 1, 00:07:48.178 "num_base_bdevs_operational": 2, 00:07:48.178 "base_bdevs_list": [ 00:07:48.178 { 00:07:48.178 "name": "BaseBdev1", 00:07:48.178 "uuid": "95698ee6-d132-4bcc-90c8-ae8622ef2bd8", 00:07:48.178 "is_configured": true, 00:07:48.178 "data_offset": 2048, 00:07:48.178 "data_size": 63488 00:07:48.178 }, 00:07:48.178 { 00:07:48.178 "name": "BaseBdev2", 00:07:48.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.178 "is_configured": false, 00:07:48.178 "data_offset": 0, 00:07:48.178 "data_size": 0 00:07:48.178 } 00:07:48.178 ] 
00:07:48.178 }' 00:07:48.178 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.179 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.439 [2024-11-28 18:48:17.925875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:48.439 [2024-11-28 18:48:17.925984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.439 [2024-11-28 18:48:17.937907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.439 [2024-11-28 18:48:17.939790] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.439 [2024-11-28 18:48:17.939829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:48.439 18:48:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.439 "name": 
"Existed_Raid", 00:07:48.439 "uuid": "1904950d-af0c-40d0-bccc-bbf09f42de5c", 00:07:48.439 "strip_size_kb": 64, 00:07:48.439 "state": "configuring", 00:07:48.439 "raid_level": "raid0", 00:07:48.439 "superblock": true, 00:07:48.439 "num_base_bdevs": 2, 00:07:48.439 "num_base_bdevs_discovered": 1, 00:07:48.439 "num_base_bdevs_operational": 2, 00:07:48.439 "base_bdevs_list": [ 00:07:48.439 { 00:07:48.439 "name": "BaseBdev1", 00:07:48.439 "uuid": "95698ee6-d132-4bcc-90c8-ae8622ef2bd8", 00:07:48.439 "is_configured": true, 00:07:48.439 "data_offset": 2048, 00:07:48.439 "data_size": 63488 00:07:48.439 }, 00:07:48.439 { 00:07:48.439 "name": "BaseBdev2", 00:07:48.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.439 "is_configured": false, 00:07:48.439 "data_offset": 0, 00:07:48.439 "data_size": 0 00:07:48.439 } 00:07:48.439 ] 00:07:48.439 }' 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.439 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.011 [2024-11-28 18:48:18.332991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.011 [2024-11-28 18:48:18.333267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:49.011 [2024-11-28 18:48:18.333319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:49.011 [2024-11-28 18:48:18.333606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:49.011 BaseBdev2 00:07:49.011 [2024-11-28 18:48:18.333796] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:49.011 [2024-11-28 18:48:18.333817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:49.011 [2024-11-28 18:48:18.333936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.011 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.011 [ 00:07:49.011 
{ 00:07:49.011 "name": "BaseBdev2", 00:07:49.011 "aliases": [ 00:07:49.011 "daa9712a-19fc-4505-a051-2ff5fd891c62" 00:07:49.011 ], 00:07:49.011 "product_name": "Malloc disk", 00:07:49.011 "block_size": 512, 00:07:49.011 "num_blocks": 65536, 00:07:49.011 "uuid": "daa9712a-19fc-4505-a051-2ff5fd891c62", 00:07:49.011 "assigned_rate_limits": { 00:07:49.011 "rw_ios_per_sec": 0, 00:07:49.011 "rw_mbytes_per_sec": 0, 00:07:49.011 "r_mbytes_per_sec": 0, 00:07:49.011 "w_mbytes_per_sec": 0 00:07:49.011 }, 00:07:49.011 "claimed": true, 00:07:49.011 "claim_type": "exclusive_write", 00:07:49.011 "zoned": false, 00:07:49.011 "supported_io_types": { 00:07:49.011 "read": true, 00:07:49.011 "write": true, 00:07:49.011 "unmap": true, 00:07:49.011 "flush": true, 00:07:49.011 "reset": true, 00:07:49.011 "nvme_admin": false, 00:07:49.011 "nvme_io": false, 00:07:49.011 "nvme_io_md": false, 00:07:49.011 "write_zeroes": true, 00:07:49.011 "zcopy": true, 00:07:49.011 "get_zone_info": false, 00:07:49.011 "zone_management": false, 00:07:49.011 "zone_append": false, 00:07:49.011 "compare": false, 00:07:49.012 "compare_and_write": false, 00:07:49.012 "abort": true, 00:07:49.012 "seek_hole": false, 00:07:49.012 "seek_data": false, 00:07:49.012 "copy": true, 00:07:49.012 "nvme_iov_md": false 00:07:49.012 }, 00:07:49.012 "memory_domains": [ 00:07:49.012 { 00:07:49.012 "dma_device_id": "system", 00:07:49.012 "dma_device_type": 1 00:07:49.012 }, 00:07:49.012 { 00:07:49.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.012 "dma_device_type": 2 00:07:49.012 } 00:07:49.012 ], 00:07:49.012 "driver_specific": {} 00:07:49.012 } 00:07:49.012 ] 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:49.012 18:48:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.012 "name": 
"Existed_Raid", 00:07:49.012 "uuid": "1904950d-af0c-40d0-bccc-bbf09f42de5c", 00:07:49.012 "strip_size_kb": 64, 00:07:49.012 "state": "online", 00:07:49.012 "raid_level": "raid0", 00:07:49.012 "superblock": true, 00:07:49.012 "num_base_bdevs": 2, 00:07:49.012 "num_base_bdevs_discovered": 2, 00:07:49.012 "num_base_bdevs_operational": 2, 00:07:49.012 "base_bdevs_list": [ 00:07:49.012 { 00:07:49.012 "name": "BaseBdev1", 00:07:49.012 "uuid": "95698ee6-d132-4bcc-90c8-ae8622ef2bd8", 00:07:49.012 "is_configured": true, 00:07:49.012 "data_offset": 2048, 00:07:49.012 "data_size": 63488 00:07:49.012 }, 00:07:49.012 { 00:07:49.012 "name": "BaseBdev2", 00:07:49.012 "uuid": "daa9712a-19fc-4505-a051-2ff5fd891c62", 00:07:49.012 "is_configured": true, 00:07:49.012 "data_offset": 2048, 00:07:49.012 "data_size": 63488 00:07:49.012 } 00:07:49.012 ] 00:07:49.012 }' 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.012 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:49.270 [2024-11-28 18:48:18.789385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:49.270 "name": "Existed_Raid", 00:07:49.270 "aliases": [ 00:07:49.270 "1904950d-af0c-40d0-bccc-bbf09f42de5c" 00:07:49.270 ], 00:07:49.270 "product_name": "Raid Volume", 00:07:49.270 "block_size": 512, 00:07:49.270 "num_blocks": 126976, 00:07:49.270 "uuid": "1904950d-af0c-40d0-bccc-bbf09f42de5c", 00:07:49.270 "assigned_rate_limits": { 00:07:49.270 "rw_ios_per_sec": 0, 00:07:49.270 "rw_mbytes_per_sec": 0, 00:07:49.270 "r_mbytes_per_sec": 0, 00:07:49.270 "w_mbytes_per_sec": 0 00:07:49.270 }, 00:07:49.270 "claimed": false, 00:07:49.270 "zoned": false, 00:07:49.270 "supported_io_types": { 00:07:49.270 "read": true, 00:07:49.270 "write": true, 00:07:49.270 "unmap": true, 00:07:49.270 "flush": true, 00:07:49.270 "reset": true, 00:07:49.270 "nvme_admin": false, 00:07:49.270 "nvme_io": false, 00:07:49.270 "nvme_io_md": false, 00:07:49.270 "write_zeroes": true, 00:07:49.270 "zcopy": false, 00:07:49.270 "get_zone_info": false, 00:07:49.270 "zone_management": false, 00:07:49.270 "zone_append": false, 00:07:49.270 "compare": false, 00:07:49.270 "compare_and_write": false, 00:07:49.270 "abort": false, 00:07:49.270 "seek_hole": false, 00:07:49.270 "seek_data": false, 00:07:49.270 "copy": false, 00:07:49.270 "nvme_iov_md": false 00:07:49.270 }, 00:07:49.270 "memory_domains": [ 00:07:49.270 { 00:07:49.270 "dma_device_id": "system", 00:07:49.270 "dma_device_type": 1 00:07:49.270 }, 00:07:49.270 { 00:07:49.270 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:49.270 "dma_device_type": 2 00:07:49.270 }, 00:07:49.270 { 00:07:49.270 "dma_device_id": "system", 00:07:49.270 "dma_device_type": 1 00:07:49.270 }, 00:07:49.270 { 00:07:49.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.270 "dma_device_type": 2 00:07:49.270 } 00:07:49.270 ], 00:07:49.270 "driver_specific": { 00:07:49.270 "raid": { 00:07:49.270 "uuid": "1904950d-af0c-40d0-bccc-bbf09f42de5c", 00:07:49.270 "strip_size_kb": 64, 00:07:49.270 "state": "online", 00:07:49.270 "raid_level": "raid0", 00:07:49.270 "superblock": true, 00:07:49.270 "num_base_bdevs": 2, 00:07:49.270 "num_base_bdevs_discovered": 2, 00:07:49.270 "num_base_bdevs_operational": 2, 00:07:49.270 "base_bdevs_list": [ 00:07:49.270 { 00:07:49.270 "name": "BaseBdev1", 00:07:49.270 "uuid": "95698ee6-d132-4bcc-90c8-ae8622ef2bd8", 00:07:49.270 "is_configured": true, 00:07:49.270 "data_offset": 2048, 00:07:49.270 "data_size": 63488 00:07:49.270 }, 00:07:49.270 { 00:07:49.270 "name": "BaseBdev2", 00:07:49.270 "uuid": "daa9712a-19fc-4505-a051-2ff5fd891c62", 00:07:49.270 "is_configured": true, 00:07:49.270 "data_offset": 2048, 00:07:49.270 "data_size": 63488 00:07:49.270 } 00:07:49.270 ] 00:07:49.270 } 00:07:49.270 } 00:07:49.270 }' 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:49.270 BaseBdev2' 00:07:49.270 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.530 18:48:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.530 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.530 [2024-11-28 18:48:19.013254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:49.530 [2024-11-28 18:48:19.013321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.530 [2024-11-28 18:48:19.013378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.530 "name": "Existed_Raid", 00:07:49.530 "uuid": "1904950d-af0c-40d0-bccc-bbf09f42de5c", 00:07:49.530 "strip_size_kb": 64, 00:07:49.530 "state": "offline", 00:07:49.530 "raid_level": "raid0", 00:07:49.530 "superblock": true, 00:07:49.530 "num_base_bdevs": 2, 00:07:49.530 "num_base_bdevs_discovered": 1, 00:07:49.530 "num_base_bdevs_operational": 1, 00:07:49.530 "base_bdevs_list": [ 00:07:49.530 { 00:07:49.530 "name": null, 00:07:49.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.530 "is_configured": false, 00:07:49.530 "data_offset": 0, 00:07:49.530 "data_size": 63488 00:07:49.530 }, 00:07:49.530 { 00:07:49.530 "name": "BaseBdev2", 00:07:49.530 "uuid": "daa9712a-19fc-4505-a051-2ff5fd891c62", 00:07:49.530 "is_configured": true, 00:07:49.530 "data_offset": 2048, 00:07:49.530 "data_size": 63488 00:07:49.530 } 00:07:49.530 ] 00:07:49.530 }' 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:07:49.530 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.102 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:50.102 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.103 [2024-11-28 18:48:19.504656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:50.103 [2024-11-28 18:48:19.504755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:50.103 18:48:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73907 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73907 ']' 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73907 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73907 00:07:50.103 killing process with pid 73907 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.103 18:48:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73907' 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73907 00:07:50.103 [2024-11-28 18:48:19.611608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.103 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73907 00:07:50.103 [2024-11-28 18:48:19.612606] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.363 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:50.363 ************************************ 00:07:50.363 END TEST raid_state_function_test_sb 00:07:50.363 ************************************ 00:07:50.363 00:07:50.363 real 0m3.747s 00:07:50.363 user 0m5.925s 00:07:50.363 sys 0m0.705s 00:07:50.363 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.363 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.363 18:48:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:50.363 18:48:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:50.363 18:48:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.363 18:48:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.363 ************************************ 00:07:50.363 START TEST raid_superblock_test 00:07:50.363 ************************************ 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:50.363 18:48:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74137 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74137 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74137 ']' 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.363 18:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.623 [2024-11-28 18:48:19.991397] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:50.623 [2024-11-28 18:48:19.991594] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74137 ] 00:07:50.623 [2024-11-28 18:48:20.126790] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:50.623 [2024-11-28 18:48:20.166206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.623 [2024-11-28 18:48:20.190861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.883 [2024-11-28 18:48:20.232711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.883 [2024-11-28 18:48:20.232751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.453 malloc1 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.453 [2024-11-28 18:48:20.828610] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:51.453 [2024-11-28 18:48:20.828712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.453 [2024-11-28 18:48:20.828780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:51.453 [2024-11-28 18:48:20.828809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.453 [2024-11-28 18:48:20.830938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.453 [2024-11-28 18:48:20.831023] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:51.453 pt1 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.453 malloc2 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.453 [2024-11-28 18:48:20.860921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:51.453 [2024-11-28 18:48:20.861022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.453 [2024-11-28 18:48:20.861056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:51.453 [2024-11-28 18:48:20.861082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.453 [2024-11-28 18:48:20.863073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.453 [2024-11-28 18:48:20.863144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:51.453 pt2 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.453 [2024-11-28 18:48:20.872945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:51.453 [2024-11-28 18:48:20.874694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:51.453 [2024-11-28 18:48:20.874828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:51.453 [2024-11-28 18:48:20.874841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:51.453 [2024-11-28 18:48:20.875104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:51.453 [2024-11-28 18:48:20.875246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:51.453 [2024-11-28 18:48:20.875257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:51.453 [2024-11-28 18:48:20.875362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.453 18:48:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.453 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.454 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.454 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.454 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.454 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.454 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.454 "name": "raid_bdev1", 00:07:51.454 "uuid": "799874e3-78f4-4b81-88e9-4d776936240c", 00:07:51.454 "strip_size_kb": 64, 00:07:51.454 "state": "online", 00:07:51.454 "raid_level": "raid0", 00:07:51.454 "superblock": true, 00:07:51.454 "num_base_bdevs": 2, 00:07:51.454 "num_base_bdevs_discovered": 2, 00:07:51.454 "num_base_bdevs_operational": 2, 00:07:51.454 "base_bdevs_list": [ 00:07:51.454 { 00:07:51.454 "name": "pt1", 00:07:51.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.454 "is_configured": true, 00:07:51.454 "data_offset": 2048, 00:07:51.454 "data_size": 63488 00:07:51.454 }, 00:07:51.454 { 00:07:51.454 "name": "pt2", 00:07:51.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.454 
"is_configured": true, 00:07:51.454 "data_offset": 2048, 00:07:51.454 "data_size": 63488 00:07:51.454 } 00:07:51.454 ] 00:07:51.454 }' 00:07:51.454 18:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.454 18:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.713 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.713 [2024-11-28 18:48:21.313390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.973 "name": "raid_bdev1", 00:07:51.973 "aliases": [ 00:07:51.973 "799874e3-78f4-4b81-88e9-4d776936240c" 00:07:51.973 ], 00:07:51.973 "product_name": "Raid Volume", 00:07:51.973 "block_size": 512, 00:07:51.973 "num_blocks": 126976, 00:07:51.973 "uuid": 
"799874e3-78f4-4b81-88e9-4d776936240c", 00:07:51.973 "assigned_rate_limits": { 00:07:51.973 "rw_ios_per_sec": 0, 00:07:51.973 "rw_mbytes_per_sec": 0, 00:07:51.973 "r_mbytes_per_sec": 0, 00:07:51.973 "w_mbytes_per_sec": 0 00:07:51.973 }, 00:07:51.973 "claimed": false, 00:07:51.973 "zoned": false, 00:07:51.973 "supported_io_types": { 00:07:51.973 "read": true, 00:07:51.973 "write": true, 00:07:51.973 "unmap": true, 00:07:51.973 "flush": true, 00:07:51.973 "reset": true, 00:07:51.973 "nvme_admin": false, 00:07:51.973 "nvme_io": false, 00:07:51.973 "nvme_io_md": false, 00:07:51.973 "write_zeroes": true, 00:07:51.973 "zcopy": false, 00:07:51.973 "get_zone_info": false, 00:07:51.973 "zone_management": false, 00:07:51.973 "zone_append": false, 00:07:51.973 "compare": false, 00:07:51.973 "compare_and_write": false, 00:07:51.973 "abort": false, 00:07:51.973 "seek_hole": false, 00:07:51.973 "seek_data": false, 00:07:51.973 "copy": false, 00:07:51.973 "nvme_iov_md": false 00:07:51.973 }, 00:07:51.973 "memory_domains": [ 00:07:51.973 { 00:07:51.973 "dma_device_id": "system", 00:07:51.973 "dma_device_type": 1 00:07:51.973 }, 00:07:51.973 { 00:07:51.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.973 "dma_device_type": 2 00:07:51.973 }, 00:07:51.973 { 00:07:51.973 "dma_device_id": "system", 00:07:51.973 "dma_device_type": 1 00:07:51.973 }, 00:07:51.973 { 00:07:51.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.973 "dma_device_type": 2 00:07:51.973 } 00:07:51.973 ], 00:07:51.973 "driver_specific": { 00:07:51.973 "raid": { 00:07:51.973 "uuid": "799874e3-78f4-4b81-88e9-4d776936240c", 00:07:51.973 "strip_size_kb": 64, 00:07:51.973 "state": "online", 00:07:51.973 "raid_level": "raid0", 00:07:51.973 "superblock": true, 00:07:51.973 "num_base_bdevs": 2, 00:07:51.973 "num_base_bdevs_discovered": 2, 00:07:51.973 "num_base_bdevs_operational": 2, 00:07:51.973 "base_bdevs_list": [ 00:07:51.973 { 00:07:51.973 "name": "pt1", 00:07:51.973 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:51.973 "is_configured": true, 00:07:51.973 "data_offset": 2048, 00:07:51.973 "data_size": 63488 00:07:51.973 }, 00:07:51.973 { 00:07:51.973 "name": "pt2", 00:07:51.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.973 "is_configured": true, 00:07:51.973 "data_offset": 2048, 00:07:51.973 "data_size": 63488 00:07:51.973 } 00:07:51.973 ] 00:07:51.973 } 00:07:51.973 } 00:07:51.973 }' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:51.973 pt2' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.973 [2024-11-28 18:48:21.553330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.973 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=799874e3-78f4-4b81-88e9-4d776936240c 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 799874e3-78f4-4b81-88e9-4d776936240c ']' 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.234 18:48:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.234 [2024-11-28 18:48:21.601113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.234 [2024-11-28 18:48:21.601136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.234 [2024-11-28 18:48:21.601214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.234 [2024-11-28 18:48:21.601264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.234 [2024-11-28 18:48:21.601280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.234 [2024-11-28 18:48:21.737175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:52.234 [2024-11-28 18:48:21.738996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:52.234 [2024-11-28 18:48:21.739050] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:52.234 [2024-11-28 18:48:21.739124] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:52.234 [2024-11-28 18:48:21.739141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.234 [2024-11-28 18:48:21.739150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:07:52.234 request: 00:07:52.234 { 00:07:52.234 "name": "raid_bdev1", 00:07:52.234 "raid_level": "raid0", 00:07:52.234 "base_bdevs": [ 00:07:52.234 "malloc1", 00:07:52.234 "malloc2" 00:07:52.234 ], 00:07:52.234 "strip_size_kb": 64, 00:07:52.234 "superblock": false, 00:07:52.234 "method": "bdev_raid_create", 00:07:52.234 "req_id": 1 00:07:52.234 } 00:07:52.234 Got JSON-RPC error response 00:07:52.234 response: 00:07:52.234 { 00:07:52.234 "code": -17, 00:07:52.234 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:07:52.234 } 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:52.234 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.235 [2024-11-28 18:48:21.801169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:52.235 [2024-11-28 18:48:21.801255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.235 [2024-11-28 18:48:21.801286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:52.235 
[2024-11-28 18:48:21.801317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.235 [2024-11-28 18:48:21.803447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.235 [2024-11-28 18:48:21.803515] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:52.235 [2024-11-28 18:48:21.803618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:52.235 [2024-11-28 18:48:21.803678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:52.235 pt1 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.235 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.494 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.494 "name": "raid_bdev1", 00:07:52.494 "uuid": "799874e3-78f4-4b81-88e9-4d776936240c", 00:07:52.494 "strip_size_kb": 64, 00:07:52.494 "state": "configuring", 00:07:52.494 "raid_level": "raid0", 00:07:52.494 "superblock": true, 00:07:52.494 "num_base_bdevs": 2, 00:07:52.494 "num_base_bdevs_discovered": 1, 00:07:52.494 "num_base_bdevs_operational": 2, 00:07:52.494 "base_bdevs_list": [ 00:07:52.494 { 00:07:52.494 "name": "pt1", 00:07:52.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:52.494 "is_configured": true, 00:07:52.494 "data_offset": 2048, 00:07:52.494 "data_size": 63488 00:07:52.494 }, 00:07:52.494 { 00:07:52.494 "name": null, 00:07:52.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.494 "is_configured": false, 00:07:52.494 "data_offset": 2048, 00:07:52.494 "data_size": 63488 00:07:52.494 } 00:07:52.494 ] 00:07:52.494 }' 00:07:52.494 18:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.494 18:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 [2024-11-28 18:48:22.221303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:52.754 [2024-11-28 18:48:22.221415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.754 [2024-11-28 18:48:22.221468] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:52.754 [2024-11-28 18:48:22.221500] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.754 [2024-11-28 18:48:22.221937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.754 [2024-11-28 18:48:22.222001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:52.754 [2024-11-28 18:48:22.222105] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:52.754 [2024-11-28 18:48:22.222158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:52.754 [2024-11-28 18:48:22.222273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:52.754 [2024-11-28 18:48:22.222313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.754 [2024-11-28 18:48:22.222590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:52.754 [2024-11-28 18:48:22.222746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:52.754 [2024-11-28 18:48:22.222758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:52.754 [2024-11-28 18:48:22.222866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.754 
pt2 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.754 18:48:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.754 "name": "raid_bdev1", 00:07:52.754 "uuid": "799874e3-78f4-4b81-88e9-4d776936240c", 00:07:52.754 "strip_size_kb": 64, 00:07:52.754 "state": "online", 00:07:52.755 "raid_level": "raid0", 00:07:52.755 "superblock": true, 00:07:52.755 "num_base_bdevs": 2, 00:07:52.755 "num_base_bdevs_discovered": 2, 00:07:52.755 "num_base_bdevs_operational": 2, 00:07:52.755 "base_bdevs_list": [ 00:07:52.755 { 00:07:52.755 "name": "pt1", 00:07:52.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:52.755 "is_configured": true, 00:07:52.755 "data_offset": 2048, 00:07:52.755 "data_size": 63488 00:07:52.755 }, 00:07:52.755 { 00:07:52.755 "name": "pt2", 00:07:52.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.755 "is_configured": true, 00:07:52.755 "data_offset": 2048, 00:07:52.755 "data_size": 63488 00:07:52.755 } 00:07:52.755 ] 00:07:52.755 }' 00:07:52.755 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.755 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.325 [2024-11-28 18:48:22.637652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:53.325 "name": "raid_bdev1", 00:07:53.325 "aliases": [ 00:07:53.325 "799874e3-78f4-4b81-88e9-4d776936240c" 00:07:53.325 ], 00:07:53.325 "product_name": "Raid Volume", 00:07:53.325 "block_size": 512, 00:07:53.325 "num_blocks": 126976, 00:07:53.325 "uuid": "799874e3-78f4-4b81-88e9-4d776936240c", 00:07:53.325 "assigned_rate_limits": { 00:07:53.325 "rw_ios_per_sec": 0, 00:07:53.325 "rw_mbytes_per_sec": 0, 00:07:53.325 "r_mbytes_per_sec": 0, 00:07:53.325 "w_mbytes_per_sec": 0 00:07:53.325 }, 00:07:53.325 "claimed": false, 00:07:53.325 "zoned": false, 00:07:53.325 "supported_io_types": { 00:07:53.325 "read": true, 00:07:53.325 "write": true, 00:07:53.325 "unmap": true, 00:07:53.325 "flush": true, 00:07:53.325 "reset": true, 00:07:53.325 "nvme_admin": false, 00:07:53.325 "nvme_io": false, 00:07:53.325 "nvme_io_md": false, 00:07:53.325 "write_zeroes": true, 00:07:53.325 "zcopy": false, 00:07:53.325 "get_zone_info": false, 00:07:53.325 "zone_management": false, 00:07:53.325 "zone_append": false, 00:07:53.325 "compare": false, 00:07:53.325 "compare_and_write": false, 00:07:53.325 "abort": false, 00:07:53.325 "seek_hole": false, 00:07:53.325 "seek_data": false, 00:07:53.325 "copy": false, 00:07:53.325 "nvme_iov_md": false 00:07:53.325 }, 00:07:53.325 "memory_domains": [ 00:07:53.325 { 00:07:53.325 "dma_device_id": "system", 00:07:53.325 "dma_device_type": 1 00:07:53.325 }, 00:07:53.325 { 00:07:53.325 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:53.325 "dma_device_type": 2 00:07:53.325 }, 00:07:53.325 { 00:07:53.325 "dma_device_id": "system", 00:07:53.325 "dma_device_type": 1 00:07:53.325 }, 00:07:53.325 { 00:07:53.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.325 "dma_device_type": 2 00:07:53.325 } 00:07:53.325 ], 00:07:53.325 "driver_specific": { 00:07:53.325 "raid": { 00:07:53.325 "uuid": "799874e3-78f4-4b81-88e9-4d776936240c", 00:07:53.325 "strip_size_kb": 64, 00:07:53.325 "state": "online", 00:07:53.325 "raid_level": "raid0", 00:07:53.325 "superblock": true, 00:07:53.325 "num_base_bdevs": 2, 00:07:53.325 "num_base_bdevs_discovered": 2, 00:07:53.325 "num_base_bdevs_operational": 2, 00:07:53.325 "base_bdevs_list": [ 00:07:53.325 { 00:07:53.325 "name": "pt1", 00:07:53.325 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:53.325 "is_configured": true, 00:07:53.325 "data_offset": 2048, 00:07:53.325 "data_size": 63488 00:07:53.325 }, 00:07:53.325 { 00:07:53.325 "name": "pt2", 00:07:53.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:53.325 "is_configured": true, 00:07:53.325 "data_offset": 2048, 00:07:53.325 "data_size": 63488 00:07:53.325 } 00:07:53.325 ] 00:07:53.325 } 00:07:53.325 } 00:07:53.325 }' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:53.325 pt2' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt1 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:53.325 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.326 [2024-11-28 18:48:22.845680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 799874e3-78f4-4b81-88e9-4d776936240c '!=' 799874e3-78f4-4b81-88e9-4d776936240c ']' 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74137 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74137 ']' 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74137 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74137 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74137' 00:07:53.326 killing process with pid 74137 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74137 00:07:53.326 [2024-11-28 18:48:22.917461] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:53.326 [2024-11-28 18:48:22.917577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.326 [2024-11-28 18:48:22.917647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.326 18:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74137 00:07:53.326 [2024-11-28 18:48:22.917695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:53.586 [2024-11-28 18:48:22.940134] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.586 18:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:53.586 ************************************ 00:07:53.586 END TEST raid_superblock_test 00:07:53.586 ************************************ 00:07:53.586 00:07:53.586 real 0m3.255s 00:07:53.586 user 0m5.036s 00:07:53.586 sys 0m0.691s 00:07:53.586 18:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.586 18:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.857 18:48:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:53.857 18:48:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:53.857 18:48:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.857 18:48:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.857 ************************************ 00:07:53.857 START TEST raid_read_error_test 00:07:53.857 ************************************ 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=2 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:53.857 18:48:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NmWEZUePj6 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74343 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74343 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74343 ']' 00:07:53.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.857 18:48:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.857 [2024-11-28 18:48:23.328882] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:53.857 [2024-11-28 18:48:23.329022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74343 ] 00:07:54.131 [2024-11-28 18:48:23.463608] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:54.131 [2024-11-28 18:48:23.485966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.131 [2024-11-28 18:48:23.510831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.131 [2024-11-28 18:48:23.552719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.131 [2024-11-28 18:48:23.552758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 BaseBdev1_malloc 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 true 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 [2024-11-28 18:48:24.172765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:54.701 [2024-11-28 18:48:24.172866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.701 [2024-11-28 18:48:24.172905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:54.701 [2024-11-28 18:48:24.172918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.701 [2024-11-28 18:48:24.175061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.701 [2024-11-28 18:48:24.175108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:54.701 BaseBdev1 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 BaseBdev2_malloc 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 true 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 [2024-11-28 18:48:24.213238] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:54.701 [2024-11-28 18:48:24.213287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.701 [2024-11-28 18:48:24.213302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:54.701 [2024-11-28 18:48:24.213313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.701 [2024-11-28 18:48:24.215464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.701 [2024-11-28 18:48:24.215541] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:54.701 BaseBdev2 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.701 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.701 [2024-11-28 18:48:24.225277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.701 [2024-11-28 18:48:24.227186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.701 [2024-11-28 18:48:24.227406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:07:54.701 [2024-11-28 18:48:24.227435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.701 [2024-11-28 18:48:24.227682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:54.701 [2024-11-28 18:48:24.227843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:54.701 [2024-11-28 18:48:24.227853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:54.701 [2024-11-28 18:48:24.227985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.702 "name": "raid_bdev1", 00:07:54.702 "uuid": "014f7d95-f7a8-4595-b7bb-dcc8631a026f", 00:07:54.702 "strip_size_kb": 64, 00:07:54.702 "state": "online", 00:07:54.702 "raid_level": "raid0", 00:07:54.702 "superblock": true, 00:07:54.702 "num_base_bdevs": 2, 00:07:54.702 "num_base_bdevs_discovered": 2, 00:07:54.702 "num_base_bdevs_operational": 2, 00:07:54.702 "base_bdevs_list": [ 00:07:54.702 { 00:07:54.702 "name": "BaseBdev1", 00:07:54.702 "uuid": "47700c92-fc51-50b3-b22c-31a07f282881", 00:07:54.702 "is_configured": true, 00:07:54.702 "data_offset": 2048, 00:07:54.702 "data_size": 63488 00:07:54.702 }, 00:07:54.702 { 00:07:54.702 "name": "BaseBdev2", 00:07:54.702 "uuid": "2867745e-33dc-5ecb-837b-8431d59f31ce", 00:07:54.702 "is_configured": true, 00:07:54.702 "data_offset": 2048, 00:07:54.702 "data_size": 63488 00:07:54.702 } 00:07:54.702 ] 00:07:54.702 }' 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.702 18:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.270 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:55.270 18:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:55.270 [2024-11-28 18:48:24.737755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:56.210 
18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.210 "name": "raid_bdev1", 00:07:56.210 "uuid": "014f7d95-f7a8-4595-b7bb-dcc8631a026f", 00:07:56.210 "strip_size_kb": 64, 00:07:56.210 "state": "online", 00:07:56.210 "raid_level": "raid0", 00:07:56.210 "superblock": true, 00:07:56.210 "num_base_bdevs": 2, 00:07:56.210 "num_base_bdevs_discovered": 2, 00:07:56.210 "num_base_bdevs_operational": 2, 00:07:56.210 "base_bdevs_list": [ 00:07:56.210 { 00:07:56.210 "name": "BaseBdev1", 00:07:56.210 "uuid": "47700c92-fc51-50b3-b22c-31a07f282881", 00:07:56.210 "is_configured": true, 00:07:56.210 "data_offset": 2048, 00:07:56.210 "data_size": 63488 00:07:56.210 }, 00:07:56.210 { 00:07:56.210 "name": "BaseBdev2", 00:07:56.210 "uuid": "2867745e-33dc-5ecb-837b-8431d59f31ce", 00:07:56.210 "is_configured": true, 00:07:56.210 "data_offset": 2048, 00:07:56.210 "data_size": 63488 00:07:56.210 } 00:07:56.210 ] 00:07:56.210 }' 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.210 18:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.781 [2024-11-28 18:48:26.099943] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.781 [2024-11-28 18:48:26.100043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.781 [2024-11-28 18:48:26.102669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.781 [2024-11-28 18:48:26.102750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.781 [2024-11-28 18:48:26.102800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.781 [2024-11-28 18:48:26.102840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:56.781 { 00:07:56.781 "results": [ 00:07:56.781 { 00:07:56.781 "job": "raid_bdev1", 00:07:56.781 "core_mask": "0x1", 00:07:56.781 "workload": "randrw", 00:07:56.781 "percentage": 50, 00:07:56.781 "status": "finished", 00:07:56.781 "queue_depth": 1, 00:07:56.781 "io_size": 131072, 00:07:56.781 "runtime": 1.360559, 00:07:56.781 "iops": 17420.045731203132, 00:07:56.781 "mibps": 2177.5057164003915, 00:07:56.781 "io_failed": 1, 00:07:56.781 "io_timeout": 0, 00:07:56.781 "avg_latency_us": 79.02401210921603, 00:07:56.781 "min_latency_us": 24.76771550597054, 00:07:56.781 "max_latency_us": 1349.5057962172057 00:07:56.781 } 00:07:56.781 ], 00:07:56.781 "core_count": 1 00:07:56.781 } 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74343 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74343 ']' 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74343 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.781 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74343 00:07:56.782 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.782 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.782 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74343' 00:07:56.782 killing process with pid 74343 00:07:56.782 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74343 00:07:56.782 [2024-11-28 18:48:26.151515] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.782 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74343 00:07:56.782 [2024-11-28 18:48:26.166640] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.782 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:56.782 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NmWEZUePj6 00:07:56.782 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:57.042 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:57.042 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:57.042 ************************************ 00:07:57.042 END TEST raid_read_error_test 00:07:57.042 ************************************ 00:07:57.042 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.042 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.042 18:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:57.042 00:07:57.042 real 0m3.163s 
00:07:57.042 user 0m4.015s 00:07:57.042 sys 0m0.500s 00:07:57.042 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.042 18:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.042 18:48:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:57.042 18:48:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.042 18:48:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.042 18:48:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.042 ************************************ 00:07:57.042 START TEST raid_write_error_test 00:07:57.042 ************************************ 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JD1OjNMuM6 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74472 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74472 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74472 ']' 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.042 18:48:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.042 18:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.042 [2024-11-28 18:48:26.560044] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:57.042 [2024-11-28 18:48:26.560250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74472 ] 00:07:57.303 [2024-11-28 18:48:26.694400] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.303 [2024-11-28 18:48:26.733781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.303 [2024-11-28 18:48:26.759033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.303 [2024-11-28 18:48:26.801047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.303 [2024-11-28 18:48:26.801184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.874 BaseBdev1_malloc 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.874 true 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.874 18:48:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.874 [2024-11-28 18:48:27.414121] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:57.874 [2024-11-28 18:48:27.414183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.874 [2024-11-28 18:48:27.414211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:57.874 [2024-11-28 18:48:27.414226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.874 [2024-11-28 18:48:27.416390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.874 [2024-11-28 18:48:27.416443] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:57.874 BaseBdev1 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.874 BaseBdev2_malloc 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.874 true 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.874 [2024-11-28 18:48:27.454717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:57.874 [2024-11-28 18:48:27.454766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.874 [2024-11-28 18:48:27.454781] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:57.874 [2024-11-28 18:48:27.454791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.874 [2024-11-28 18:48:27.456899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.874 [2024-11-28 18:48:27.456981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:57.874 BaseBdev2 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.874 [2024-11-28 18:48:27.466754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.874 [2024-11-28 18:48:27.468667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.874 [2024-11-28 18:48:27.468836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:57.874 
[2024-11-28 18:48:27.468851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:57.874 [2024-11-28 18:48:27.469102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:57.874 [2024-11-28 18:48:27.469248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:57.874 [2024-11-28 18:48:27.469257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:57.874 [2024-11-28 18:48:27.469368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.874 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.134 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.134 18:48:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.134 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.134 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.134 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.134 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.134 "name": "raid_bdev1", 00:07:58.134 "uuid": "5ccc2973-2485-4eed-8c0a-8fee8303b9e2", 00:07:58.134 "strip_size_kb": 64, 00:07:58.134 "state": "online", 00:07:58.134 "raid_level": "raid0", 00:07:58.134 "superblock": true, 00:07:58.134 "num_base_bdevs": 2, 00:07:58.134 "num_base_bdevs_discovered": 2, 00:07:58.134 "num_base_bdevs_operational": 2, 00:07:58.134 "base_bdevs_list": [ 00:07:58.134 { 00:07:58.134 "name": "BaseBdev1", 00:07:58.134 "uuid": "3f0af062-7f9a-56c8-8f55-81c0ae52c58d", 00:07:58.134 "is_configured": true, 00:07:58.134 "data_offset": 2048, 00:07:58.134 "data_size": 63488 00:07:58.134 }, 00:07:58.134 { 00:07:58.134 "name": "BaseBdev2", 00:07:58.134 "uuid": "d996eb25-da7d-5da9-8322-267ff27f458a", 00:07:58.134 "is_configured": true, 00:07:58.134 "data_offset": 2048, 00:07:58.134 "data_size": 63488 00:07:58.134 } 00:07:58.134 ] 00:07:58.134 }' 00:07:58.134 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.134 18:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.393 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:58.393 18:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:58.654 [2024-11-28 18:48:28.023253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:59.594 18:48:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.594 18:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.594 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.594 "name": "raid_bdev1", 00:07:59.594 "uuid": "5ccc2973-2485-4eed-8c0a-8fee8303b9e2", 00:07:59.594 "strip_size_kb": 64, 00:07:59.594 "state": "online", 00:07:59.594 "raid_level": "raid0", 00:07:59.594 "superblock": true, 00:07:59.594 "num_base_bdevs": 2, 00:07:59.594 "num_base_bdevs_discovered": 2, 00:07:59.594 "num_base_bdevs_operational": 2, 00:07:59.594 "base_bdevs_list": [ 00:07:59.594 { 00:07:59.594 "name": "BaseBdev1", 00:07:59.594 "uuid": "3f0af062-7f9a-56c8-8f55-81c0ae52c58d", 00:07:59.594 "is_configured": true, 00:07:59.594 "data_offset": 2048, 00:07:59.594 "data_size": 63488 00:07:59.594 }, 00:07:59.594 { 00:07:59.594 "name": "BaseBdev2", 00:07:59.594 "uuid": "d996eb25-da7d-5da9-8322-267ff27f458a", 00:07:59.594 "is_configured": true, 00:07:59.594 "data_offset": 2048, 00:07:59.594 "data_size": 63488 00:07:59.594 } 00:07:59.594 ] 00:07:59.594 }' 00:07:59.594 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.594 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.854 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.854 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.855 [2024-11-28 18:48:29.405878] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.855 [2024-11-28 18:48:29.405913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.855 [2024-11-28 18:48:29.408738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.855 [2024-11-28 18:48:29.408783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.855 [2024-11-28 18:48:29.408816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.855 [2024-11-28 18:48:29.408830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:59.855 { 00:07:59.855 "results": [ 00:07:59.855 { 00:07:59.855 "job": "raid_bdev1", 00:07:59.855 "core_mask": "0x1", 00:07:59.855 "workload": "randrw", 00:07:59.855 "percentage": 50, 00:07:59.855 "status": "finished", 00:07:59.855 "queue_depth": 1, 00:07:59.855 "io_size": 131072, 00:07:59.855 "runtime": 1.380739, 00:07:59.855 "iops": 17289.292183388752, 00:07:59.855 "mibps": 2161.161522923594, 00:07:59.855 "io_failed": 1, 00:07:59.855 "io_timeout": 0, 00:07:59.855 "avg_latency_us": 79.7166583540508, 00:07:59.855 "min_latency_us": 24.76771550597054, 00:07:59.855 "max_latency_us": 1399.4874923733985 00:07:59.855 } 00:07:59.855 ], 00:07:59.855 "core_count": 1 00:07:59.855 } 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74472 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74472 ']' 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74472 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:59.855 18:48:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74472 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.855 killing process with pid 74472 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74472' 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74472 00:07:59.855 [2024-11-28 18:48:29.455530] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.855 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74472 00:08:00.115 [2024-11-28 18:48:29.471180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JD1OjNMuM6 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:00.115 ************************************ 00:08:00.115 END TEST raid_write_error_test 00:08:00.115 ************************************ 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 
]] 00:08:00.115 00:08:00.115 real 0m3.230s 00:08:00.115 user 0m4.157s 00:08:00.115 sys 0m0.492s 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.115 18:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.375 18:48:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:00.375 18:48:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:00.375 18:48:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.375 18:48:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.375 18:48:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.375 ************************************ 00:08:00.375 START TEST raid_state_function_test 00:08:00.375 ************************************ 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:00.375 Process raid pid: 74599 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74599 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74599' 
00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74599 00:08:00.375 18:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74599 ']' 00:08:00.376 18:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.376 18:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.376 18:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.376 18:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.376 18:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.376 [2024-11-28 18:48:29.853336] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:00.376 [2024-11-28 18:48:29.853571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.635 [2024-11-28 18:48:29.988673] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:00.635 [2024-11-28 18:48:30.026805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.635 [2024-11-28 18:48:30.052262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.635 [2024-11-28 18:48:30.094163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.635 [2024-11-28 18:48:30.094272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.205 18:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.205 18:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.205 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.205 18:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.205 18:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.205 [2024-11-28 18:48:30.681516] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.205 [2024-11-28 18:48:30.681627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.205 [2024-11-28 18:48:30.681675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.205 [2024-11-28 18:48:30.681696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.205 18:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.205 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:01.205 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.205 18:48:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.206 "name": "Existed_Raid", 00:08:01.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.206 "strip_size_kb": 64, 00:08:01.206 "state": "configuring", 00:08:01.206 "raid_level": "concat", 00:08:01.206 "superblock": false, 00:08:01.206 "num_base_bdevs": 2, 00:08:01.206 "num_base_bdevs_discovered": 0, 00:08:01.206 "num_base_bdevs_operational": 2, 00:08:01.206 "base_bdevs_list": [ 00:08:01.206 { 00:08:01.206 "name": "BaseBdev1", 00:08:01.206 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:01.206 "is_configured": false, 00:08:01.206 "data_offset": 0, 00:08:01.206 "data_size": 0 00:08:01.206 }, 00:08:01.206 { 00:08:01.206 "name": "BaseBdev2", 00:08:01.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.206 "is_configured": false, 00:08:01.206 "data_offset": 0, 00:08:01.206 "data_size": 0 00:08:01.206 } 00:08:01.206 ] 00:08:01.206 }' 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.206 18:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.774 [2024-11-28 18:48:31.097515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.774 [2024-11-28 18:48:31.097607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.774 [2024-11-28 18:48:31.109534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.774 [2024-11-28 18:48:31.109572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.774 
[2024-11-28 18:48:31.109585] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.774 [2024-11-28 18:48:31.109592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.774 [2024-11-28 18:48:31.130284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.774 BaseBdev1 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.774 18:48:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.774 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.774 [ 00:08:01.774 { 00:08:01.774 "name": "BaseBdev1", 00:08:01.774 "aliases": [ 00:08:01.774 "58b45c5d-8257-4867-96f4-ac2f9ded1a6f" 00:08:01.774 ], 00:08:01.774 "product_name": "Malloc disk", 00:08:01.774 "block_size": 512, 00:08:01.774 "num_blocks": 65536, 00:08:01.774 "uuid": "58b45c5d-8257-4867-96f4-ac2f9ded1a6f", 00:08:01.774 "assigned_rate_limits": { 00:08:01.774 "rw_ios_per_sec": 0, 00:08:01.774 "rw_mbytes_per_sec": 0, 00:08:01.774 "r_mbytes_per_sec": 0, 00:08:01.774 "w_mbytes_per_sec": 0 00:08:01.775 }, 00:08:01.775 "claimed": true, 00:08:01.775 "claim_type": "exclusive_write", 00:08:01.775 "zoned": false, 00:08:01.775 "supported_io_types": { 00:08:01.775 "read": true, 00:08:01.775 "write": true, 00:08:01.775 "unmap": true, 00:08:01.775 "flush": true, 00:08:01.775 "reset": true, 00:08:01.775 "nvme_admin": false, 00:08:01.775 "nvme_io": false, 00:08:01.775 "nvme_io_md": false, 00:08:01.775 "write_zeroes": true, 00:08:01.775 "zcopy": true, 00:08:01.775 "get_zone_info": false, 00:08:01.775 "zone_management": false, 00:08:01.775 "zone_append": false, 00:08:01.775 "compare": false, 00:08:01.775 "compare_and_write": false, 00:08:01.775 "abort": true, 00:08:01.775 "seek_hole": false, 00:08:01.775 "seek_data": false, 00:08:01.775 "copy": true, 00:08:01.775 "nvme_iov_md": false 00:08:01.775 }, 00:08:01.775 "memory_domains": [ 00:08:01.775 { 00:08:01.775 "dma_device_id": "system", 00:08:01.775 "dma_device_type": 1 00:08:01.775 }, 00:08:01.775 { 00:08:01.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.775 "dma_device_type": 
2 00:08:01.775 } 00:08:01.775 ], 00:08:01.775 "driver_specific": {} 00:08:01.775 } 00:08:01.775 ] 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.775 "name": "Existed_Raid", 00:08:01.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.775 "strip_size_kb": 64, 00:08:01.775 "state": "configuring", 00:08:01.775 "raid_level": "concat", 00:08:01.775 "superblock": false, 00:08:01.775 "num_base_bdevs": 2, 00:08:01.775 "num_base_bdevs_discovered": 1, 00:08:01.775 "num_base_bdevs_operational": 2, 00:08:01.775 "base_bdevs_list": [ 00:08:01.775 { 00:08:01.775 "name": "BaseBdev1", 00:08:01.775 "uuid": "58b45c5d-8257-4867-96f4-ac2f9ded1a6f", 00:08:01.775 "is_configured": true, 00:08:01.775 "data_offset": 0, 00:08:01.775 "data_size": 65536 00:08:01.775 }, 00:08:01.775 { 00:08:01.775 "name": "BaseBdev2", 00:08:01.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.775 "is_configured": false, 00:08:01.775 "data_offset": 0, 00:08:01.775 "data_size": 0 00:08:01.775 } 00:08:01.775 ] 00:08:01.775 }' 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.775 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.034 [2024-11-28 18:48:31.566413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.034 [2024-11-28 18:48:31.566472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.034 18:48:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.034 [2024-11-28 18:48:31.578465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.034 [2024-11-28 18:48:31.580263] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.034 [2024-11-28 18:48:31.580305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.034 "name": "Existed_Raid", 00:08:02.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.034 "strip_size_kb": 64, 00:08:02.034 "state": "configuring", 00:08:02.034 "raid_level": "concat", 00:08:02.034 "superblock": false, 00:08:02.034 "num_base_bdevs": 2, 00:08:02.034 "num_base_bdevs_discovered": 1, 00:08:02.034 "num_base_bdevs_operational": 2, 00:08:02.034 "base_bdevs_list": [ 00:08:02.034 { 00:08:02.034 "name": "BaseBdev1", 00:08:02.034 "uuid": "58b45c5d-8257-4867-96f4-ac2f9ded1a6f", 00:08:02.034 "is_configured": true, 00:08:02.034 "data_offset": 0, 00:08:02.034 "data_size": 65536 00:08:02.034 }, 00:08:02.034 { 00:08:02.034 "name": "BaseBdev2", 00:08:02.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.034 "is_configured": false, 00:08:02.034 "data_offset": 0, 00:08:02.034 "data_size": 0 00:08:02.034 } 00:08:02.034 ] 00:08:02.034 }' 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.034 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:02.603 18:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:02.603 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.603 18:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.603 [2024-11-28 18:48:32.001507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.603 [2024-11-28 18:48:32.001609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:02.603 [2024-11-28 18:48:32.001637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:02.603 [2024-11-28 18:48:32.001934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:02.603 [2024-11-28 18:48:32.002123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:02.603 [2024-11-28 18:48:32.002166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:02.603 [2024-11-28 18:48:32.002411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.603 BaseBdev2 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.603 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.603 [ 00:08:02.604 { 00:08:02.604 "name": "BaseBdev2", 00:08:02.604 "aliases": [ 00:08:02.604 "31dd2b8f-2987-4a45-89da-eabcace44187" 00:08:02.604 ], 00:08:02.604 "product_name": "Malloc disk", 00:08:02.604 "block_size": 512, 00:08:02.604 "num_blocks": 65536, 00:08:02.604 "uuid": "31dd2b8f-2987-4a45-89da-eabcace44187", 00:08:02.604 "assigned_rate_limits": { 00:08:02.604 "rw_ios_per_sec": 0, 00:08:02.604 "rw_mbytes_per_sec": 0, 00:08:02.604 "r_mbytes_per_sec": 0, 00:08:02.604 "w_mbytes_per_sec": 0 00:08:02.604 }, 00:08:02.604 "claimed": true, 00:08:02.604 "claim_type": "exclusive_write", 00:08:02.604 "zoned": false, 00:08:02.604 "supported_io_types": { 00:08:02.604 "read": true, 00:08:02.604 "write": true, 00:08:02.604 "unmap": true, 00:08:02.604 "flush": true, 00:08:02.604 "reset": true, 00:08:02.604 "nvme_admin": false, 00:08:02.604 "nvme_io": false, 00:08:02.604 "nvme_io_md": false, 00:08:02.604 "write_zeroes": true, 00:08:02.604 "zcopy": true, 00:08:02.604 "get_zone_info": false, 00:08:02.604 "zone_management": false, 00:08:02.604 "zone_append": false, 00:08:02.604 "compare": false, 00:08:02.604 "compare_and_write": false, 
00:08:02.604 "abort": true, 00:08:02.604 "seek_hole": false, 00:08:02.604 "seek_data": false, 00:08:02.604 "copy": true, 00:08:02.604 "nvme_iov_md": false 00:08:02.604 }, 00:08:02.604 "memory_domains": [ 00:08:02.604 { 00:08:02.604 "dma_device_id": "system", 00:08:02.604 "dma_device_type": 1 00:08:02.604 }, 00:08:02.604 { 00:08:02.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.604 "dma_device_type": 2 00:08:02.604 } 00:08:02.604 ], 00:08:02.604 "driver_specific": {} 00:08:02.604 } 00:08:02.604 ] 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.604 
18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.604 "name": "Existed_Raid", 00:08:02.604 "uuid": "5dafc7c8-486e-4a1c-b012-7378dfca479f", 00:08:02.604 "strip_size_kb": 64, 00:08:02.604 "state": "online", 00:08:02.604 "raid_level": "concat", 00:08:02.604 "superblock": false, 00:08:02.604 "num_base_bdevs": 2, 00:08:02.604 "num_base_bdevs_discovered": 2, 00:08:02.604 "num_base_bdevs_operational": 2, 00:08:02.604 "base_bdevs_list": [ 00:08:02.604 { 00:08:02.604 "name": "BaseBdev1", 00:08:02.604 "uuid": "58b45c5d-8257-4867-96f4-ac2f9ded1a6f", 00:08:02.604 "is_configured": true, 00:08:02.604 "data_offset": 0, 00:08:02.604 "data_size": 65536 00:08:02.604 }, 00:08:02.604 { 00:08:02.604 "name": "BaseBdev2", 00:08:02.604 "uuid": "31dd2b8f-2987-4a45-89da-eabcace44187", 00:08:02.604 "is_configured": true, 00:08:02.604 "data_offset": 0, 00:08:02.604 "data_size": 65536 00:08:02.604 } 00:08:02.604 ] 00:08:02.604 }' 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.604 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.863 18:48:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.863 [2024-11-28 18:48:32.429941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.863 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.864 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.864 "name": "Existed_Raid", 00:08:02.864 "aliases": [ 00:08:02.864 "5dafc7c8-486e-4a1c-b012-7378dfca479f" 00:08:02.864 ], 00:08:02.864 "product_name": "Raid Volume", 00:08:02.864 "block_size": 512, 00:08:02.864 "num_blocks": 131072, 00:08:02.864 "uuid": "5dafc7c8-486e-4a1c-b012-7378dfca479f", 00:08:02.864 "assigned_rate_limits": { 00:08:02.864 "rw_ios_per_sec": 0, 00:08:02.864 "rw_mbytes_per_sec": 0, 00:08:02.864 "r_mbytes_per_sec": 0, 00:08:02.864 "w_mbytes_per_sec": 0 00:08:02.864 }, 00:08:02.864 "claimed": false, 00:08:02.864 "zoned": false, 00:08:02.864 "supported_io_types": { 00:08:02.864 "read": true, 00:08:02.864 "write": true, 00:08:02.864 "unmap": true, 00:08:02.864 
"flush": true, 00:08:02.864 "reset": true, 00:08:02.864 "nvme_admin": false, 00:08:02.864 "nvme_io": false, 00:08:02.864 "nvme_io_md": false, 00:08:02.864 "write_zeroes": true, 00:08:02.864 "zcopy": false, 00:08:02.864 "get_zone_info": false, 00:08:02.864 "zone_management": false, 00:08:02.864 "zone_append": false, 00:08:02.864 "compare": false, 00:08:02.864 "compare_and_write": false, 00:08:02.864 "abort": false, 00:08:02.864 "seek_hole": false, 00:08:02.864 "seek_data": false, 00:08:02.864 "copy": false, 00:08:02.864 "nvme_iov_md": false 00:08:02.864 }, 00:08:02.864 "memory_domains": [ 00:08:02.864 { 00:08:02.864 "dma_device_id": "system", 00:08:02.864 "dma_device_type": 1 00:08:02.864 }, 00:08:02.864 { 00:08:02.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.864 "dma_device_type": 2 00:08:02.864 }, 00:08:02.864 { 00:08:02.864 "dma_device_id": "system", 00:08:02.864 "dma_device_type": 1 00:08:02.864 }, 00:08:02.864 { 00:08:02.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.864 "dma_device_type": 2 00:08:02.864 } 00:08:02.864 ], 00:08:02.864 "driver_specific": { 00:08:02.864 "raid": { 00:08:02.864 "uuid": "5dafc7c8-486e-4a1c-b012-7378dfca479f", 00:08:02.864 "strip_size_kb": 64, 00:08:02.864 "state": "online", 00:08:02.864 "raid_level": "concat", 00:08:02.864 "superblock": false, 00:08:02.864 "num_base_bdevs": 2, 00:08:02.864 "num_base_bdevs_discovered": 2, 00:08:02.864 "num_base_bdevs_operational": 2, 00:08:02.864 "base_bdevs_list": [ 00:08:02.864 { 00:08:02.864 "name": "BaseBdev1", 00:08:02.864 "uuid": "58b45c5d-8257-4867-96f4-ac2f9ded1a6f", 00:08:02.864 "is_configured": true, 00:08:02.864 "data_offset": 0, 00:08:02.864 "data_size": 65536 00:08:02.864 }, 00:08:02.864 { 00:08:02.864 "name": "BaseBdev2", 00:08:02.864 "uuid": "31dd2b8f-2987-4a45-89da-eabcace44187", 00:08:02.864 "is_configured": true, 00:08:02.864 "data_offset": 0, 00:08:02.864 "data_size": 65536 00:08:02.864 } 00:08:02.864 ] 00:08:02.864 } 00:08:02.864 } 00:08:02.864 }' 00:08:02.864 
18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.123 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:03.123 BaseBdev2' 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.124 [2024-11-28 18:48:32.629792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.124 [2024-11-28 18:48:32.629820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.124 [2024-11-28 18:48:32.629886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:03.124 "name": "Existed_Raid",
00:08:03.124 "uuid": "5dafc7c8-486e-4a1c-b012-7378dfca479f",
00:08:03.124 "strip_size_kb": 64,
00:08:03.124 "state": "offline",
00:08:03.124 "raid_level": "concat",
00:08:03.124 "superblock": false,
00:08:03.124 "num_base_bdevs": 2,
00:08:03.124 "num_base_bdevs_discovered": 1,
00:08:03.124 "num_base_bdevs_operational": 1,
00:08:03.124 "base_bdevs_list": [
00:08:03.124 {
00:08:03.124 "name": null,
00:08:03.124 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:03.124 "is_configured": false,
00:08:03.124 "data_offset": 0,
00:08:03.124 "data_size": 65536
00:08:03.124 },
00:08:03.124 {
00:08:03.124 "name": "BaseBdev2",
00:08:03.124 "uuid": "31dd2b8f-2987-4a45-89da-eabcace44187",
00:08:03.124 "is_configured": true,
00:08:03.124 "data_offset": 0,
00:08:03.124 "data_size": 65536
00:08:03.124 }
00:08:03.124 ]
00:08:03.124 }'
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:03.124 18:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.692 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:03.692 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:03.692 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:03.692 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.692 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.692 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.693 [2024-11-28 18:48:33.145178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:03.693 [2024-11-28 18:48:33.145286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74599
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74599 ']'
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74599
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74599
killing process with pid 74599
18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74599'
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74599
00:08:03.693 [2024-11-28 18:48:33.239162] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:03.693 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74599
00:08:03.693 [2024-11-28 18:48:33.240148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:03.953
00:08:03.953 real 0m3.698s
00:08:03.953 user 0m5.836s
00:08:03.953 sys 0m0.714s
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.953 ************************************
00:08:03.953 END TEST raid_state_function_test
00:08:03.953 ************************************
00:08:03.953 18:48:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true
00:08:03.953 18:48:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:03.953 18:48:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:03.953 18:48:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:03.953 ************************************
00:08:03.953 START TEST raid_state_function_test_sb
00:08:03.953 ************************************
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:08:03.953 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:08:03.954 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74836
00:08:03.954 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:03.954 18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74836'
Process raid pid: 74836
18:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74836
00:08:03.954 18:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74836 ']'
00:08:03.954 18:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:03.954 18:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:03.954 18:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:03.954 18:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:03.954 18:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:04.214 [2024-11-28 18:48:33.623917] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:08:04.214 [2024-11-28 18:48:33.624102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:04.214 [2024-11-28 18:48:33.760045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:08:04.214 [2024-11-28 18:48:33.794687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:04.474 [2024-11-28 18:48:33.820530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:04.474 [2024-11-28 18:48:33.862528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:04.474 [2024-11-28 18:48:33.862633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.070 [2024-11-28 18:48:34.450181] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:05.070 [2024-11-28 18:48:34.450238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:05.070 [2024-11-28 18:48:34.450267] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:05.070 [2024-11-28 18:48:34.450275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:05.070 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:05.071 "name": "Existed_Raid",
00:08:05.071 "uuid": "728dafbf-b641-4102-85fa-4a397d5fda4d",
00:08:05.071 "strip_size_kb": 64,
00:08:05.071 "state": "configuring",
00:08:05.071 "raid_level": "concat",
00:08:05.071 "superblock": true,
00:08:05.071 "num_base_bdevs": 2,
00:08:05.071 "num_base_bdevs_discovered": 0,
00:08:05.071 "num_base_bdevs_operational": 2,
00:08:05.071 "base_bdevs_list": [
00:08:05.071 {
00:08:05.071 "name": "BaseBdev1",
00:08:05.071 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:05.071 "is_configured": false,
00:08:05.071 "data_offset": 0,
00:08:05.071 "data_size": 0
00:08:05.071 },
00:08:05.071 {
00:08:05.071 "name": "BaseBdev2",
00:08:05.071 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:05.071 "is_configured": false,
00:08:05.071 "data_offset": 0,
00:08:05.071 "data_size": 0
00:08:05.071 }
00:08:05.071 ]
00:08:05.071 }'
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:05.071 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.331 [2024-11-28 18:48:34.862192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:05.331 [2024-11-28 18:48:34.862270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.331 [2024-11-28 18:48:34.874221] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:05.331 [2024-11-28 18:48:34.874290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:05.331 [2024-11-28 18:48:34.874337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:05.331 [2024-11-28 18:48:34.874357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.331 [2024-11-28 18:48:34.895116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:05.331 BaseBdev1
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:05.331
18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.331 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.331 [
00:08:05.331 {
00:08:05.331 "name": "BaseBdev1",
00:08:05.331 "aliases": [
00:08:05.331 "5b9e3c77-d98d-4aa5-a59a-af4490db2015"
00:08:05.331 ],
00:08:05.331 "product_name": "Malloc disk",
00:08:05.331 "block_size": 512,
00:08:05.331 "num_blocks": 65536,
00:08:05.331 "uuid": "5b9e3c77-d98d-4aa5-a59a-af4490db2015",
00:08:05.331 "assigned_rate_limits": {
00:08:05.331 "rw_ios_per_sec": 0,
00:08:05.331 "rw_mbytes_per_sec": 0,
00:08:05.331 "r_mbytes_per_sec": 0,
00:08:05.331 "w_mbytes_per_sec": 0
00:08:05.331 },
00:08:05.331 "claimed": true,
00:08:05.331 "claim_type": "exclusive_write",
00:08:05.331 "zoned": false,
00:08:05.331 "supported_io_types": {
00:08:05.331 "read": true,
00:08:05.331 "write": true,
00:08:05.331 "unmap": true,
00:08:05.331 "flush": true,
00:08:05.331 "reset": true,
00:08:05.331 "nvme_admin": false,
00:08:05.331 "nvme_io": false,
00:08:05.331 "nvme_io_md": false,
00:08:05.331 "write_zeroes": true,
00:08:05.331 "zcopy": true,
00:08:05.331 "get_zone_info": false,
00:08:05.331 "zone_management": false,
00:08:05.331 "zone_append": false,
00:08:05.331 "compare": false,
00:08:05.331 "compare_and_write": false,
00:08:05.331 "abort": true,
00:08:05.331 "seek_hole": false,
00:08:05.331 "seek_data": false,
00:08:05.331 "copy": true,
00:08:05.331 "nvme_iov_md": false
00:08:05.331 },
00:08:05.331 "memory_domains": [
00:08:05.331 {
00:08:05.331 "dma_device_id": "system",
00:08:05.331 "dma_device_type": 1
00:08:05.331 },
00:08:05.331 {
00:08:05.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:05.331 "dma_device_type": 2
00:08:05.331 }
00:08:05.331 ],
00:08:05.331 "driver_specific": {}
00:08:05.331 }
00:08:05.331 ]
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:05.592 "name": "Existed_Raid",
00:08:05.592 "uuid": "f46fb30e-a60a-4187-9410-cb9f3ed0678e",
00:08:05.592 "strip_size_kb": 64,
00:08:05.592 "state": "configuring",
00:08:05.592 "raid_level": "concat",
00:08:05.592 "superblock": true,
00:08:05.592 "num_base_bdevs": 2,
00:08:05.592 "num_base_bdevs_discovered": 1,
00:08:05.592 "num_base_bdevs_operational": 2,
00:08:05.592 "base_bdevs_list": [
00:08:05.592 {
00:08:05.592 "name": "BaseBdev1",
00:08:05.592 "uuid": "5b9e3c77-d98d-4aa5-a59a-af4490db2015",
00:08:05.592 "is_configured": true,
00:08:05.592 "data_offset": 2048,
00:08:05.592 "data_size": 63488
00:08:05.592 },
00:08:05.592 {
00:08:05.592 "name": "BaseBdev2",
00:08:05.592 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:05.592 "is_configured": false,
00:08:05.592 "data_offset": 0,
00:08:05.592 "data_size": 0
00:08:05.592 }
00:08:05.592 ]
00:08:05.592 }'
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:05.592 18:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.856 [2024-11-28 18:48:35.367295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:05.856 [2024-11-28 18:48:35.367345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.856 [2024-11-28 18:48:35.375346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:05.856 [2024-11-28 18:48:35.377100] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:05.856 [2024-11-28 18:48:35.377139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:05.856
18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:05.856 "name": "Existed_Raid",
00:08:05.856 "uuid": "2bf18fa9-e05f-48b3-81e7-aa1ff7392fbe",
00:08:05.856 "strip_size_kb": 64,
00:08:05.856 "state": "configuring",
00:08:05.856 "raid_level": "concat",
00:08:05.856 "superblock": true,
00:08:05.856 "num_base_bdevs": 2,
00:08:05.856 "num_base_bdevs_discovered": 1,
00:08:05.856 "num_base_bdevs_operational": 2,
00:08:05.856 "base_bdevs_list": [
00:08:05.856 {
00:08:05.856 "name": "BaseBdev1",
00:08:05.856 "uuid": "5b9e3c77-d98d-4aa5-a59a-af4490db2015",
00:08:05.856 "is_configured": true,
00:08:05.856 "data_offset": 2048,
00:08:05.856 "data_size": 63488
00:08:05.856 },
00:08:05.856 {
00:08:05.856 "name": "BaseBdev2",
00:08:05.856 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:05.856 "is_configured": false,
00:08:05.856 "data_offset": 0,
00:08:05.856 "data_size": 0
00:08:05.856 }
00:08:05.856 ]
00:08:05.856 }'
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:05.856 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.425 [2024-11-28 18:48:35.790407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:06.425 [2024-11-28 18:48:35.790705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:08:06.425 [2024-11-28 18:48:35.790762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 BaseBdev2 [2024-11-28 18:48:35.791052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 [2024-11-28 18:48:35.791235] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:08:06.425 [2024-11-28 18:48:35.791293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00
00:08:06.425 [2024-11-28 18:48:35.791506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.425 [
00:08:06.425
{
00:08:06.425 "name": "BaseBdev2",
00:08:06.425 "aliases": [
00:08:06.425 "f9ed6b99-e650-42b2-8431-92c7589fff5a"
00:08:06.425 ],
00:08:06.425 "product_name": "Malloc disk",
00:08:06.425 "block_size": 512,
00:08:06.425 "num_blocks": 65536,
00:08:06.425 "uuid": "f9ed6b99-e650-42b2-8431-92c7589fff5a",
00:08:06.425 "assigned_rate_limits": {
00:08:06.425 "rw_ios_per_sec": 0,
00:08:06.425 "rw_mbytes_per_sec": 0,
00:08:06.425 "r_mbytes_per_sec": 0,
00:08:06.425 "w_mbytes_per_sec": 0
00:08:06.425 },
00:08:06.425 "claimed": true,
00:08:06.425 "claim_type": "exclusive_write",
00:08:06.425 "zoned": false,
00:08:06.425 "supported_io_types": {
00:08:06.425 "read": true,
00:08:06.425 "write": true,
00:08:06.425 "unmap": true,
00:08:06.425 "flush": true,
00:08:06.425 "reset": true,
00:08:06.425 "nvme_admin": false,
00:08:06.425 "nvme_io": false,
00:08:06.425 "nvme_io_md": false,
00:08:06.425 "write_zeroes": true,
00:08:06.425 "zcopy": true,
00:08:06.425 "get_zone_info": false,
00:08:06.425 "zone_management": false,
00:08:06.425 "zone_append": false,
00:08:06.425 "compare": false,
00:08:06.425 "compare_and_write": false,
00:08:06.425 "abort": true,
00:08:06.425 "seek_hole": false,
00:08:06.425 "seek_data": false,
00:08:06.425 "copy": true,
00:08:06.425 "nvme_iov_md": false
00:08:06.425 },
00:08:06.425 "memory_domains": [
00:08:06.425 {
00:08:06.425 "dma_device_id": "system",
00:08:06.425 "dma_device_type": 1
00:08:06.425 },
00:08:06.425 {
00:08:06.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:06.425 "dma_device_type": 2
00:08:06.425 }
00:08:06.425 ],
00:08:06.425 "driver_specific": {}
00:08:06.425 }
00:08:06.425 ]
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:06.425 "name": "Existed_Raid",
00:08:06.425 "uuid": "2bf18fa9-e05f-48b3-81e7-aa1ff7392fbe",
00:08:06.425 "strip_size_kb": 64,
00:08:06.425 "state": "online",
00:08:06.425 "raid_level": "concat",
00:08:06.425 "superblock": true,
00:08:06.425 "num_base_bdevs": 2,
00:08:06.425 "num_base_bdevs_discovered": 2,
00:08:06.425 "num_base_bdevs_operational": 2,
00:08:06.425 "base_bdevs_list": [
00:08:06.425 {
00:08:06.425 "name": "BaseBdev1",
00:08:06.425 "uuid": "5b9e3c77-d98d-4aa5-a59a-af4490db2015",
00:08:06.425 "is_configured": true,
00:08:06.425 "data_offset": 2048,
00:08:06.425 "data_size": 63488
00:08:06.425 },
00:08:06.425 {
00:08:06.425 "name": "BaseBdev2",
00:08:06.425 "uuid": "f9ed6b99-e650-42b2-8431-92c7589fff5a",
00:08:06.425 "is_configured": true,
00:08:06.425 "data_offset": 2048,
00:08:06.425 "data_size": 63488
00:08:06.425 }
00:08:06.425 ]
00:08:06.425 }'
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:06.425 18:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.686 [2024-11-28 18:48:36.246876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:06.686 "name": "Existed_Raid",
00:08:06.686 "aliases": [
00:08:06.686 "2bf18fa9-e05f-48b3-81e7-aa1ff7392fbe"
00:08:06.686 ],
00:08:06.686 "product_name": "Raid Volume",
00:08:06.686 "block_size": 512,
00:08:06.686 "num_blocks": 126976,
00:08:06.686 "uuid": "2bf18fa9-e05f-48b3-81e7-aa1ff7392fbe",
00:08:06.686 "assigned_rate_limits": {
00:08:06.686 "rw_ios_per_sec": 0,
00:08:06.686 "rw_mbytes_per_sec": 0,
00:08:06.686 "r_mbytes_per_sec": 0,
00:08:06.686 "w_mbytes_per_sec": 0
00:08:06.686 },
00:08:06.686 "claimed": false,
00:08:06.686 "zoned": false,
00:08:06.686 "supported_io_types": {
00:08:06.686 "read": true,
00:08:06.686 "write": true,
00:08:06.686 "unmap": true,
00:08:06.686 "flush": true,
00:08:06.686 "reset": true,
00:08:06.686 "nvme_admin": false,
00:08:06.686 "nvme_io": false,
00:08:06.686 "nvme_io_md": false,
00:08:06.686 "write_zeroes": true,
00:08:06.686 "zcopy": false,
00:08:06.686 "get_zone_info": false,
00:08:06.686 "zone_management": false,
00:08:06.686 "zone_append": false,
00:08:06.686 "compare": false,
00:08:06.686 "compare_and_write": false,
00:08:06.686 "abort": false,
00:08:06.686 "seek_hole": false,
00:08:06.686 "seek_data": false,
00:08:06.686 "copy": false,
00:08:06.686 "nvme_iov_md": false
00:08:06.686 },
00:08:06.686 "memory_domains": [
00:08:06.686 {
00:08:06.686 "dma_device_id": "system",
00:08:06.686 "dma_device_type": 1
00:08:06.686 },
00:08:06.686 {
00:08:06.686 "dma_device_id":
"SPDK_ACCEL_DMA_DEVICE", 00:08:06.686 "dma_device_type": 2 00:08:06.686 }, 00:08:06.686 { 00:08:06.686 "dma_device_id": "system", 00:08:06.686 "dma_device_type": 1 00:08:06.686 }, 00:08:06.686 { 00:08:06.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.686 "dma_device_type": 2 00:08:06.686 } 00:08:06.686 ], 00:08:06.686 "driver_specific": { 00:08:06.686 "raid": { 00:08:06.686 "uuid": "2bf18fa9-e05f-48b3-81e7-aa1ff7392fbe", 00:08:06.686 "strip_size_kb": 64, 00:08:06.686 "state": "online", 00:08:06.686 "raid_level": "concat", 00:08:06.686 "superblock": true, 00:08:06.686 "num_base_bdevs": 2, 00:08:06.686 "num_base_bdevs_discovered": 2, 00:08:06.686 "num_base_bdevs_operational": 2, 00:08:06.686 "base_bdevs_list": [ 00:08:06.686 { 00:08:06.686 "name": "BaseBdev1", 00:08:06.686 "uuid": "5b9e3c77-d98d-4aa5-a59a-af4490db2015", 00:08:06.686 "is_configured": true, 00:08:06.686 "data_offset": 2048, 00:08:06.686 "data_size": 63488 00:08:06.686 }, 00:08:06.686 { 00:08:06.686 "name": "BaseBdev2", 00:08:06.686 "uuid": "f9ed6b99-e650-42b2-8431-92c7589fff5a", 00:08:06.686 "is_configured": true, 00:08:06.686 "data_offset": 2048, 00:08:06.686 "data_size": 63488 00:08:06.686 } 00:08:06.686 ] 00:08:06.686 } 00:08:06.686 } 00:08:06.686 }' 00:08:06.686 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:06.946 BaseBdev2' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.946 18:48:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.946 [2024-11-28 18:48:36.450681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.946 [2024-11-28 18:48:36.450710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.946 [2024-11-28 18:48:36.450770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.946 "name": "Existed_Raid", 00:08:06.946 "uuid": "2bf18fa9-e05f-48b3-81e7-aa1ff7392fbe", 00:08:06.946 "strip_size_kb": 64, 00:08:06.946 "state": "offline", 00:08:06.946 "raid_level": "concat", 00:08:06.946 "superblock": true, 00:08:06.946 "num_base_bdevs": 2, 00:08:06.946 "num_base_bdevs_discovered": 1, 00:08:06.946 "num_base_bdevs_operational": 1, 00:08:06.946 "base_bdevs_list": [ 00:08:06.946 { 00:08:06.946 "name": null, 00:08:06.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.946 "is_configured": false, 00:08:06.946 "data_offset": 0, 00:08:06.946 "data_size": 63488 00:08:06.946 }, 00:08:06.946 { 00:08:06.946 "name": "BaseBdev2", 00:08:06.946 "uuid": "f9ed6b99-e650-42b2-8431-92c7589fff5a", 00:08:06.946 "is_configured": true, 00:08:06.946 "data_offset": 2048, 00:08:06.946 "data_size": 63488 00:08:06.946 } 00:08:06.946 ] 00:08:06.946 }' 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:08:06.946 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.517 [2024-11-28 18:48:36.946047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:07.517 [2024-11-28 18:48:36.946158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.517 18:48:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.517 18:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.517 18:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:07.517 18:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:07.517 18:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:07.517 18:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74836 00:08:07.517 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74836 ']' 00:08:07.517 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74836 00:08:07.517 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:07.517 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.517 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74836 00:08:07.518 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.518 killing process with pid 74836 00:08:07.518 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.518 18:48:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74836' 00:08:07.518 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74836 00:08:07.518 [2024-11-28 18:48:37.052118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.518 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74836 00:08:07.518 [2024-11-28 18:48:37.053101] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.778 18:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:07.778 00:08:07.778 real 0m3.742s 00:08:07.778 user 0m5.909s 00:08:07.778 sys 0m0.725s 00:08:07.778 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.778 18:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.779 ************************************ 00:08:07.779 END TEST raid_state_function_test_sb 00:08:07.779 ************************************ 00:08:07.779 18:48:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:07.779 18:48:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:07.779 18:48:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.779 18:48:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.779 ************************************ 00:08:07.779 START TEST raid_superblock_test 00:08:07.779 ************************************ 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:07.779 18:48:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75071 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75071 00:08:07.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75071 ']' 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.779 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.039 [2024-11-28 18:48:37.434468] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:08.039 [2024-11-28 18:48:37.434686] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75071 ] 00:08:08.039 [2024-11-28 18:48:37.568241] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:08.039 [2024-11-28 18:48:37.605204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.039 [2024-11-28 18:48:37.630131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.299 [2024-11-28 18:48:37.672673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.299 [2024-11-28 18:48:37.672783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.870 malloc1 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.870 [2024-11-28 18:48:38.277004] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:08.870 [2024-11-28 18:48:38.277060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.870 [2024-11-28 18:48:38.277112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:08.870 [2024-11-28 18:48:38.277121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.870 [2024-11-28 18:48:38.279238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.870 [2024-11-28 18:48:38.279273] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:08.870 pt1 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.870 malloc2 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.870 [2024-11-28 18:48:38.305393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.870 [2024-11-28 18:48:38.305510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.870 [2024-11-28 18:48:38.305562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:08.870 [2024-11-28 18:48:38.305590] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.870 [2024-11-28 18:48:38.307640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.870 [2024-11-28 18:48:38.307709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.870 pt2 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.870 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.870 [2024-11-28 18:48:38.317420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:08.870 [2024-11-28 18:48:38.319278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.870 [2024-11-28 18:48:38.319477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:08.870 [2024-11-28 18:48:38.319523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:08.870 [2024-11-28 18:48:38.319801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:08.870 [2024-11-28 18:48:38.319957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:08.871 [2024-11-28 18:48:38.319998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:08.871 [2024-11-28 18:48:38.320150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.871 18:48:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.871 "name": "raid_bdev1", 00:08:08.871 "uuid": "b0adf8a8-88e7-4f7b-a663-8f790c3384d9", 00:08:08.871 "strip_size_kb": 64, 00:08:08.871 "state": "online", 00:08:08.871 "raid_level": "concat", 00:08:08.871 "superblock": true, 00:08:08.871 "num_base_bdevs": 2, 00:08:08.871 "num_base_bdevs_discovered": 2, 00:08:08.871 "num_base_bdevs_operational": 2, 00:08:08.871 "base_bdevs_list": [ 00:08:08.871 { 00:08:08.871 "name": "pt1", 00:08:08.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.871 "is_configured": true, 00:08:08.871 "data_offset": 2048, 00:08:08.871 "data_size": 63488 00:08:08.871 }, 00:08:08.871 { 00:08:08.871 "name": "pt2", 00:08:08.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.871 
"is_configured": true, 00:08:08.871 "data_offset": 2048, 00:08:08.871 "data_size": 63488 00:08:08.871 } 00:08:08.871 ] 00:08:08.871 }' 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.871 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.130 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:09.130 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:09.130 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.130 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.130 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.130 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.390 [2024-11-28 18:48:38.741843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.390 "name": "raid_bdev1", 00:08:09.390 "aliases": [ 00:08:09.390 "b0adf8a8-88e7-4f7b-a663-8f790c3384d9" 00:08:09.390 ], 00:08:09.390 "product_name": "Raid Volume", 00:08:09.390 "block_size": 512, 00:08:09.390 "num_blocks": 126976, 00:08:09.390 "uuid": 
"b0adf8a8-88e7-4f7b-a663-8f790c3384d9", 00:08:09.390 "assigned_rate_limits": { 00:08:09.390 "rw_ios_per_sec": 0, 00:08:09.390 "rw_mbytes_per_sec": 0, 00:08:09.390 "r_mbytes_per_sec": 0, 00:08:09.390 "w_mbytes_per_sec": 0 00:08:09.390 }, 00:08:09.390 "claimed": false, 00:08:09.390 "zoned": false, 00:08:09.390 "supported_io_types": { 00:08:09.390 "read": true, 00:08:09.390 "write": true, 00:08:09.390 "unmap": true, 00:08:09.390 "flush": true, 00:08:09.390 "reset": true, 00:08:09.390 "nvme_admin": false, 00:08:09.390 "nvme_io": false, 00:08:09.390 "nvme_io_md": false, 00:08:09.390 "write_zeroes": true, 00:08:09.390 "zcopy": false, 00:08:09.390 "get_zone_info": false, 00:08:09.390 "zone_management": false, 00:08:09.390 "zone_append": false, 00:08:09.390 "compare": false, 00:08:09.390 "compare_and_write": false, 00:08:09.390 "abort": false, 00:08:09.390 "seek_hole": false, 00:08:09.390 "seek_data": false, 00:08:09.390 "copy": false, 00:08:09.390 "nvme_iov_md": false 00:08:09.390 }, 00:08:09.390 "memory_domains": [ 00:08:09.390 { 00:08:09.390 "dma_device_id": "system", 00:08:09.390 "dma_device_type": 1 00:08:09.390 }, 00:08:09.390 { 00:08:09.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.390 "dma_device_type": 2 00:08:09.390 }, 00:08:09.390 { 00:08:09.390 "dma_device_id": "system", 00:08:09.390 "dma_device_type": 1 00:08:09.390 }, 00:08:09.390 { 00:08:09.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.390 "dma_device_type": 2 00:08:09.390 } 00:08:09.390 ], 00:08:09.390 "driver_specific": { 00:08:09.390 "raid": { 00:08:09.390 "uuid": "b0adf8a8-88e7-4f7b-a663-8f790c3384d9", 00:08:09.390 "strip_size_kb": 64, 00:08:09.390 "state": "online", 00:08:09.390 "raid_level": "concat", 00:08:09.390 "superblock": true, 00:08:09.390 "num_base_bdevs": 2, 00:08:09.390 "num_base_bdevs_discovered": 2, 00:08:09.390 "num_base_bdevs_operational": 2, 00:08:09.390 "base_bdevs_list": [ 00:08:09.390 { 00:08:09.390 "name": "pt1", 00:08:09.390 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:09.390 "is_configured": true, 00:08:09.390 "data_offset": 2048, 00:08:09.390 "data_size": 63488 00:08:09.390 }, 00:08:09.390 { 00:08:09.390 "name": "pt2", 00:08:09.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.390 "is_configured": true, 00:08:09.390 "data_offset": 2048, 00:08:09.390 "data_size": 63488 00:08:09.390 } 00:08:09.390 ] 00:08:09.390 } 00:08:09.390 } 00:08:09.390 }' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:09.390 pt2' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 [2024-11-28 18:48:38.961837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.390 18:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.650 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b0adf8a8-88e7-4f7b-a663-8f790c3384d9 00:08:09.650 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b0adf8a8-88e7-4f7b-a663-8f790c3384d9 ']' 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.651 18:48:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.651 [2024-11-28 18:48:39.005621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.651 [2024-11-28 18:48:39.005650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.651 [2024-11-28 18:48:39.005725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.651 [2024-11-28 18:48:39.005781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.651 [2024-11-28 18:48:39.005796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.651 [2024-11-28 18:48:39.137677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:09.651 [2024-11-28 18:48:39.139550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:09.651 [2024-11-28 18:48:39.139613] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:09.651 [2024-11-28 18:48:39.139660] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:09.651 [2024-11-28 18:48:39.139675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.651 [2024-11-28 18:48:39.139685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:09.651 request: 00:08:09.651 { 00:08:09.651 "name": "raid_bdev1", 00:08:09.651 "raid_level": "concat", 00:08:09.651 "base_bdevs": [ 00:08:09.651 "malloc1", 00:08:09.651 "malloc2" 00:08:09.651 ], 00:08:09.651 "strip_size_kb": 64, 00:08:09.651 "superblock": false, 00:08:09.651 "method": "bdev_raid_create", 00:08:09.651 "req_id": 1 00:08:09.651 } 00:08:09.651 Got JSON-RPC error response 00:08:09.651 response: 00:08:09.651 { 00:08:09.651 "code": -17, 00:08:09.651 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:08:09.651 } 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.651 [2024-11-28 18:48:39.201666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:09.651 [2024-11-28 18:48:39.201760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.651 [2024-11-28 18:48:39.201793] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:09.651 
[2024-11-28 18:48:39.201823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.651 [2024-11-28 18:48:39.203982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.651 [2024-11-28 18:48:39.204053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:09.651 [2024-11-28 18:48:39.204168] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:09.651 [2024-11-28 18:48:39.204240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:09.651 pt1 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.651 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.652 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.652 18:48:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.652 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.652 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.652 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.911 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.911 "name": "raid_bdev1", 00:08:09.911 "uuid": "b0adf8a8-88e7-4f7b-a663-8f790c3384d9", 00:08:09.911 "strip_size_kb": 64, 00:08:09.911 "state": "configuring", 00:08:09.911 "raid_level": "concat", 00:08:09.911 "superblock": true, 00:08:09.911 "num_base_bdevs": 2, 00:08:09.911 "num_base_bdevs_discovered": 1, 00:08:09.911 "num_base_bdevs_operational": 2, 00:08:09.911 "base_bdevs_list": [ 00:08:09.911 { 00:08:09.911 "name": "pt1", 00:08:09.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.911 "is_configured": true, 00:08:09.911 "data_offset": 2048, 00:08:09.911 "data_size": 63488 00:08:09.911 }, 00:08:09.911 { 00:08:09.911 "name": null, 00:08:09.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.911 "is_configured": false, 00:08:09.911 "data_offset": 2048, 00:08:09.911 "data_size": 63488 00:08:09.911 } 00:08:09.911 ] 00:08:09.911 }' 00:08:09.911 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.911 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.171 [2024-11-28 18:48:39.657836] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:10.171 [2024-11-28 18:48:39.657915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.171 [2024-11-28 18:48:39.657940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:10.171 [2024-11-28 18:48:39.657951] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.171 [2024-11-28 18:48:39.658360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.171 [2024-11-28 18:48:39.658387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:10.171 [2024-11-28 18:48:39.658477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:10.171 [2024-11-28 18:48:39.658504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:10.171 [2024-11-28 18:48:39.658592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:10.171 [2024-11-28 18:48:39.658609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.171 [2024-11-28 18:48:39.658853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:10.171 [2024-11-28 18:48:39.658971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:10.171 [2024-11-28 18:48:39.658979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:10.171 [2024-11-28 18:48:39.659113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.171 
pt2 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.171 18:48:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.171 "name": "raid_bdev1", 00:08:10.171 "uuid": "b0adf8a8-88e7-4f7b-a663-8f790c3384d9", 00:08:10.171 "strip_size_kb": 64, 00:08:10.171 "state": "online", 00:08:10.171 "raid_level": "concat", 00:08:10.171 "superblock": true, 00:08:10.171 "num_base_bdevs": 2, 00:08:10.171 "num_base_bdevs_discovered": 2, 00:08:10.171 "num_base_bdevs_operational": 2, 00:08:10.171 "base_bdevs_list": [ 00:08:10.171 { 00:08:10.171 "name": "pt1", 00:08:10.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.171 "is_configured": true, 00:08:10.171 "data_offset": 2048, 00:08:10.171 "data_size": 63488 00:08:10.171 }, 00:08:10.171 { 00:08:10.171 "name": "pt2", 00:08:10.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.171 "is_configured": true, 00:08:10.171 "data_offset": 2048, 00:08:10.171 "data_size": 63488 00:08:10.171 } 00:08:10.171 ] 00:08:10.171 }' 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.171 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.740 [2024-11-28 18:48:40.058181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.740 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:10.740 "name": "raid_bdev1", 00:08:10.740 "aliases": [ 00:08:10.740 "b0adf8a8-88e7-4f7b-a663-8f790c3384d9" 00:08:10.740 ], 00:08:10.740 "product_name": "Raid Volume", 00:08:10.740 "block_size": 512, 00:08:10.740 "num_blocks": 126976, 00:08:10.740 "uuid": "b0adf8a8-88e7-4f7b-a663-8f790c3384d9", 00:08:10.740 "assigned_rate_limits": { 00:08:10.740 "rw_ios_per_sec": 0, 00:08:10.740 "rw_mbytes_per_sec": 0, 00:08:10.740 "r_mbytes_per_sec": 0, 00:08:10.740 "w_mbytes_per_sec": 0 00:08:10.740 }, 00:08:10.740 "claimed": false, 00:08:10.740 "zoned": false, 00:08:10.740 "supported_io_types": { 00:08:10.740 "read": true, 00:08:10.740 "write": true, 00:08:10.740 "unmap": true, 00:08:10.740 "flush": true, 00:08:10.740 "reset": true, 00:08:10.740 "nvme_admin": false, 00:08:10.740 "nvme_io": false, 00:08:10.740 "nvme_io_md": false, 00:08:10.740 "write_zeroes": true, 00:08:10.740 "zcopy": false, 00:08:10.740 "get_zone_info": false, 00:08:10.740 "zone_management": false, 00:08:10.740 "zone_append": false, 00:08:10.740 "compare": false, 00:08:10.740 "compare_and_write": false, 00:08:10.740 "abort": false, 00:08:10.740 "seek_hole": false, 00:08:10.741 "seek_data": false, 00:08:10.741 "copy": false, 00:08:10.741 "nvme_iov_md": false 00:08:10.741 }, 00:08:10.741 "memory_domains": [ 00:08:10.741 { 00:08:10.741 "dma_device_id": "system", 00:08:10.741 "dma_device_type": 1 00:08:10.741 }, 00:08:10.741 { 00:08:10.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:10.741 "dma_device_type": 2 00:08:10.741 }, 00:08:10.741 { 00:08:10.741 "dma_device_id": "system", 00:08:10.741 "dma_device_type": 1 00:08:10.741 }, 00:08:10.741 { 00:08:10.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.741 "dma_device_type": 2 00:08:10.741 } 00:08:10.741 ], 00:08:10.741 "driver_specific": { 00:08:10.741 "raid": { 00:08:10.741 "uuid": "b0adf8a8-88e7-4f7b-a663-8f790c3384d9", 00:08:10.741 "strip_size_kb": 64, 00:08:10.741 "state": "online", 00:08:10.741 "raid_level": "concat", 00:08:10.741 "superblock": true, 00:08:10.741 "num_base_bdevs": 2, 00:08:10.741 "num_base_bdevs_discovered": 2, 00:08:10.741 "num_base_bdevs_operational": 2, 00:08:10.741 "base_bdevs_list": [ 00:08:10.741 { 00:08:10.741 "name": "pt1", 00:08:10.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.741 "is_configured": true, 00:08:10.741 "data_offset": 2048, 00:08:10.741 "data_size": 63488 00:08:10.741 }, 00:08:10.741 { 00:08:10.741 "name": "pt2", 00:08:10.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.741 "is_configured": true, 00:08:10.741 "data_offset": 2048, 00:08:10.741 "data_size": 63488 00:08:10.741 } 00:08:10.741 ] 00:08:10.741 } 00:08:10.741 } 00:08:10.741 }' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:10.741 pt2' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:10.741 
18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:10.741 [2024-11-28 18:48:40.318200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.741 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b0adf8a8-88e7-4f7b-a663-8f790c3384d9 '!=' b0adf8a8-88e7-4f7b-a663-8f790c3384d9 ']' 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75071 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75071 ']' 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75071 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75071 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.001 killing process with pid 75071 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75071' 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75071 00:08:11.001 [2024-11-28 18:48:40.374605] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.001 [2024-11-28 18:48:40.374714] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.001 [2024-11-28 18:48:40.374763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.001 [2024-11-28 18:48:40.374776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:11.001 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75071 00:08:11.001 [2024-11-28 18:48:40.398052] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.262 ************************************ 00:08:11.262 END TEST raid_superblock_test 00:08:11.262 ************************************ 00:08:11.262 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:11.262 00:08:11.262 real 0m3.268s 00:08:11.262 user 0m5.046s 00:08:11.262 sys 0m0.691s 00:08:11.262 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.262 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.262 18:48:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:11.262 18:48:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:11.263 18:48:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.263 18:48:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.263 ************************************ 00:08:11.263 START TEST raid_read_error_test 00:08:11.263 ************************************ 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:11.263 18:48:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fzb1HGEa7c 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75272 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75272 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75272 ']' 00:08:11.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.263 18:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.263 [2024-11-28 18:48:40.788951] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:11.263 [2024-11-28 18:48:40.789076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75272 ] 00:08:11.523 [2024-11-28 18:48:40.922624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:11.523 [2024-11-28 18:48:40.955198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.523 [2024-11-28 18:48:40.980451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.523 [2024-11-28 18:48:41.022103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.523 [2024-11-28 18:48:41.022139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.094 BaseBdev1_malloc 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.094 true 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.094 [2024-11-28 18:48:41.634176] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:12.094 [2024-11-28 18:48:41.634230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.094 [2024-11-28 18:48:41.634262] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:12.094 [2024-11-28 18:48:41.634274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.094 [2024-11-28 18:48:41.636388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.094 [2024-11-28 18:48:41.636438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:12.094 BaseBdev1 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.094 BaseBdev2_malloc 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.094 true 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.094 [2024-11-28 18:48:41.674803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:12.094 [2024-11-28 18:48:41.674849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.094 [2024-11-28 18:48:41.674880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:12.094 [2024-11-28 18:48:41.674889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.094 [2024-11-28 18:48:41.676947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.094 [2024-11-28 18:48:41.677040] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:12.094 BaseBdev2 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.094 [2024-11-28 18:48:41.686843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.094 [2024-11-28 18:48:41.688703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.094 [2024-11-28 18:48:41.688879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:08:12.094 [2024-11-28 18:48:41.688893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:12.094 [2024-11-28 18:48:41.689122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:12.094 [2024-11-28 18:48:41.689272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:12.094 [2024-11-28 18:48:41.689292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:12.094 [2024-11-28 18:48:41.689414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.094 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.354 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:12.354 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.354 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.354 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.354 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.354 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.354 "name": "raid_bdev1", 00:08:12.354 "uuid": "5807d3cb-bd7f-42a5-beb0-d53b404acb69", 00:08:12.354 "strip_size_kb": 64, 00:08:12.354 "state": "online", 00:08:12.354 "raid_level": "concat", 00:08:12.354 "superblock": true, 00:08:12.354 "num_base_bdevs": 2, 00:08:12.354 "num_base_bdevs_discovered": 2, 00:08:12.354 "num_base_bdevs_operational": 2, 00:08:12.354 "base_bdevs_list": [ 00:08:12.354 { 00:08:12.354 "name": "BaseBdev1", 00:08:12.354 "uuid": "1b83912c-fa24-5e52-b377-e7218181c4fd", 00:08:12.354 "is_configured": true, 00:08:12.354 "data_offset": 2048, 00:08:12.354 "data_size": 63488 00:08:12.354 }, 00:08:12.354 { 00:08:12.354 "name": "BaseBdev2", 00:08:12.354 "uuid": "142f5915-a965-5fa5-af8b-527d2c36daf1", 00:08:12.354 "is_configured": true, 00:08:12.355 "data_offset": 2048, 00:08:12.355 "data_size": 63488 00:08:12.355 } 00:08:12.355 ] 00:08:12.355 }' 00:08:12.355 18:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.355 18:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.614 18:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:12.614 18:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:12.614 [2024-11-28 18:48:42.207317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:13.554 
18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.554 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.814 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.814 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.814 "name": "raid_bdev1", 00:08:13.814 "uuid": "5807d3cb-bd7f-42a5-beb0-d53b404acb69", 00:08:13.814 "strip_size_kb": 64, 00:08:13.814 "state": "online", 00:08:13.814 "raid_level": "concat", 00:08:13.814 "superblock": true, 00:08:13.814 "num_base_bdevs": 2, 00:08:13.814 "num_base_bdevs_discovered": 2, 00:08:13.814 "num_base_bdevs_operational": 2, 00:08:13.814 "base_bdevs_list": [ 00:08:13.814 { 00:08:13.814 "name": "BaseBdev1", 00:08:13.814 "uuid": "1b83912c-fa24-5e52-b377-e7218181c4fd", 00:08:13.814 "is_configured": true, 00:08:13.814 "data_offset": 2048, 00:08:13.814 "data_size": 63488 00:08:13.814 }, 00:08:13.814 { 00:08:13.814 "name": "BaseBdev2", 00:08:13.814 "uuid": "142f5915-a965-5fa5-af8b-527d2c36daf1", 00:08:13.814 "is_configured": true, 00:08:13.814 "data_offset": 2048, 00:08:13.814 "data_size": 63488 00:08:13.814 } 00:08:13.814 ] 00:08:13.814 }' 00:08:13.814 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.814 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.075 [2024-11-28 18:48:43.597822] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.075 [2024-11-28 18:48:43.597916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.075 [2024-11-28 18:48:43.600471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.075 [2024-11-28 18:48:43.600553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.075 [2024-11-28 18:48:43.600602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.075 [2024-11-28 18:48:43.600643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:14.075 { 00:08:14.075 "results": [ 00:08:14.075 { 00:08:14.075 "job": "raid_bdev1", 00:08:14.075 "core_mask": "0x1", 00:08:14.075 "workload": "randrw", 00:08:14.075 "percentage": 50, 00:08:14.075 "status": "finished", 00:08:14.075 "queue_depth": 1, 00:08:14.075 "io_size": 131072, 00:08:14.075 "runtime": 1.388744, 00:08:14.075 "iops": 17465.421992822292, 00:08:14.075 "mibps": 2183.1777491027865, 00:08:14.075 "io_failed": 1, 00:08:14.075 "io_timeout": 0, 00:08:14.075 "avg_latency_us": 78.76673626527534, 00:08:14.075 "min_latency_us": 24.321450361718817, 00:08:14.075 "max_latency_us": 1392.3472500653709 00:08:14.075 } 00:08:14.075 ], 00:08:14.075 "core_count": 1 00:08:14.075 } 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75272 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75272 ']' 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75272 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75272 00:08:14.075 killing process with pid 75272 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75272' 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75272 00:08:14.075 [2024-11-28 18:48:43.649984] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.075 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75272 00:08:14.075 [2024-11-28 18:48:43.665272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.335 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fzb1HGEa7c 00:08:14.336 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:14.336 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:14.336 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:14.336 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:14.336 ************************************ 00:08:14.336 END TEST raid_read_error_test 00:08:14.336 ************************************ 00:08:14.336 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:14.336 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:14.336 18:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:14.336 00:08:14.336 real 0m3.197s 
00:08:14.336 user 0m4.097s 00:08:14.336 sys 0m0.485s 00:08:14.336 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.336 18:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.596 18:48:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:14.596 18:48:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:14.596 18:48:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.596 18:48:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.596 ************************************ 00:08:14.596 START TEST raid_write_error_test 00:08:14.596 ************************************ 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Pl0BvREPog 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75401 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75401 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75401 ']' 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.596 
18:48:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.596 18:48:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.596 [2024-11-28 18:48:44.055415] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:14.597 [2024-11-28 18:48:44.055560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75401 ] 00:08:14.597 [2024-11-28 18:48:44.189058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:14.856 [2024-11-28 18:48:44.226707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.856 [2024-11-28 18:48:44.251603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.856 [2024-11-28 18:48:44.293231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.856 [2024-11-28 18:48:44.293272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.426 BaseBdev1_malloc 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.426 true 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.426 18:48:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.426 [2024-11-28 18:48:44.909149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:15.426 [2024-11-28 18:48:44.909227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.426 [2024-11-28 18:48:44.909246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:15.426 [2024-11-28 18:48:44.909258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.426 [2024-11-28 18:48:44.911382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.426 [2024-11-28 18:48:44.911420] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:15.426 BaseBdev1 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.426 BaseBdev2_malloc 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.426 true 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.426 [2024-11-28 18:48:44.949551] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:15.426 [2024-11-28 18:48:44.949598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.426 [2024-11-28 18:48:44.949614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:15.426 [2024-11-28 18:48:44.949624] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.426 [2024-11-28 18:48:44.951639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.426 [2024-11-28 18:48:44.951686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:15.426 BaseBdev2 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.426 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.426 [2024-11-28 18:48:44.961604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.426 [2024-11-28 18:48:44.963403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.426 [2024-11-28 18:48:44.963578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:15.426 
[2024-11-28 18:48:44.963600] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:15.427 [2024-11-28 18:48:44.963840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:15.427 [2024-11-28 18:48:44.963994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:15.427 [2024-11-28 18:48:44.964012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:15.427 [2024-11-28 18:48:44.964144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.427 18:48:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.427 18:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.427 18:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.427 "name": "raid_bdev1", 00:08:15.427 "uuid": "6c26084b-a158-4e69-8749-00da0dda40d4", 00:08:15.427 "strip_size_kb": 64, 00:08:15.427 "state": "online", 00:08:15.427 "raid_level": "concat", 00:08:15.427 "superblock": true, 00:08:15.427 "num_base_bdevs": 2, 00:08:15.427 "num_base_bdevs_discovered": 2, 00:08:15.427 "num_base_bdevs_operational": 2, 00:08:15.427 "base_bdevs_list": [ 00:08:15.427 { 00:08:15.427 "name": "BaseBdev1", 00:08:15.427 "uuid": "024f11ec-6476-5850-a464-ca9d8a7ceac1", 00:08:15.427 "is_configured": true, 00:08:15.427 "data_offset": 2048, 00:08:15.427 "data_size": 63488 00:08:15.427 }, 00:08:15.427 { 00:08:15.427 "name": "BaseBdev2", 00:08:15.427 "uuid": "07a1adcb-c93d-5db1-8fb6-72da7c56ef8d", 00:08:15.427 "is_configured": true, 00:08:15.427 "data_offset": 2048, 00:08:15.427 "data_size": 63488 00:08:15.427 } 00:08:15.427 ] 00:08:15.427 }' 00:08:15.427 18:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.427 18:48:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.007 18:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:16.007 18:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:16.007 [2024-11-28 18:48:45.490063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:17.005 18:48:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.005 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.005 "name": "raid_bdev1", 00:08:17.005 "uuid": "6c26084b-a158-4e69-8749-00da0dda40d4", 00:08:17.005 "strip_size_kb": 64, 00:08:17.005 "state": "online", 00:08:17.005 "raid_level": "concat", 00:08:17.005 "superblock": true, 00:08:17.005 "num_base_bdevs": 2, 00:08:17.005 "num_base_bdevs_discovered": 2, 00:08:17.005 "num_base_bdevs_operational": 2, 00:08:17.005 "base_bdevs_list": [ 00:08:17.005 { 00:08:17.005 "name": "BaseBdev1", 00:08:17.005 "uuid": "024f11ec-6476-5850-a464-ca9d8a7ceac1", 00:08:17.005 "is_configured": true, 00:08:17.005 "data_offset": 2048, 00:08:17.005 "data_size": 63488 00:08:17.005 }, 00:08:17.005 { 00:08:17.005 "name": "BaseBdev2", 00:08:17.005 "uuid": "07a1adcb-c93d-5db1-8fb6-72da7c56ef8d", 00:08:17.005 "is_configured": true, 00:08:17.005 "data_offset": 2048, 00:08:17.006 "data_size": 63488 00:08:17.006 } 00:08:17.006 ] 00:08:17.006 }' 00:08:17.006 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.006 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.265 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:17.265 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.265 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.265 [2024-11-28 18:48:46.832215] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:17.265 [2024-11-28 18:48:46.832256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.265 [2024-11-28 18:48:46.834752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.265 [2024-11-28 18:48:46.834820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.265 [2024-11-28 18:48:46.834851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.266 [2024-11-28 18:48:46.834864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:17.266 { 00:08:17.266 "results": [ 00:08:17.266 { 00:08:17.266 "job": "raid_bdev1", 00:08:17.266 "core_mask": "0x1", 00:08:17.266 "workload": "randrw", 00:08:17.266 "percentage": 50, 00:08:17.266 "status": "finished", 00:08:17.266 "queue_depth": 1, 00:08:17.266 "io_size": 131072, 00:08:17.266 "runtime": 1.340389, 00:08:17.266 "iops": 17377.791074083718, 00:08:17.266 "mibps": 2172.2238842604647, 00:08:17.266 "io_failed": 1, 00:08:17.266 "io_timeout": 0, 00:08:17.266 "avg_latency_us": 79.17821820211063, 00:08:17.266 "min_latency_us": 24.76771550597054, 00:08:17.266 "max_latency_us": 1378.0667654493159 00:08:17.266 } 00:08:17.266 ], 00:08:17.266 "core_count": 1 00:08:17.266 } 00:08:17.266 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.266 18:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75401 00:08:17.266 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75401 ']' 00:08:17.266 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75401 00:08:17.266 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:17.266 18:48:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.266 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75401 00:08:17.526 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.526 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.526 killing process with pid 75401 00:08:17.526 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75401' 00:08:17.526 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75401 00:08:17.526 [2024-11-28 18:48:46.884220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.526 18:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75401 00:08:17.526 [2024-11-28 18:48:46.899728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.526 18:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Pl0BvREPog 00:08:17.526 18:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:17.526 18:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:17.526 18:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:17.526 18:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:17.526 18:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.526 18:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.526 18:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:17.526 00:08:17.526 real 0m3.161s 00:08:17.526 user 0m4.017s 00:08:17.526 sys 0m0.507s 00:08:17.526 18:48:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.526 18:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.526 ************************************ 00:08:17.526 END TEST raid_write_error_test 00:08:17.526 ************************************ 00:08:17.786 18:48:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:17.786 18:48:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:17.786 18:48:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.786 18:48:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.786 18:48:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.786 ************************************ 00:08:17.786 START TEST raid_state_function_test 00:08:17.786 ************************************ 00:08:17.786 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:17.786 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:17.786 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:17.786 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:17.786 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:17.786 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:17.786 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.786 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:17.786 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75528 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75528' 00:08:17.787 Process raid pid: 75528 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 
75528 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75528 ']' 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.787 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.787 [2024-11-28 18:48:47.283701] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:17.787 [2024-11-28 18:48:47.283839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.046 [2024-11-28 18:48:47.419009] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:18.046 [2024-11-28 18:48:47.458006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.046 [2024-11-28 18:48:47.482748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.046 [2024-11-28 18:48:47.524261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.046 [2024-11-28 18:48:47.524295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.615 [2024-11-28 18:48:48.103772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.615 [2024-11-28 18:48:48.103834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.615 [2024-11-28 18:48:48.103846] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.615 [2024-11-28 18:48:48.103854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.615 "name": "Existed_Raid", 00:08:18.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.615 "strip_size_kb": 0, 00:08:18.615 "state": "configuring", 00:08:18.615 "raid_level": "raid1", 00:08:18.615 "superblock": false, 00:08:18.615 "num_base_bdevs": 2, 00:08:18.615 "num_base_bdevs_discovered": 0, 00:08:18.615 "num_base_bdevs_operational": 2, 00:08:18.615 "base_bdevs_list": [ 00:08:18.615 { 00:08:18.615 "name": "BaseBdev1", 00:08:18.615 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:18.615 "is_configured": false, 00:08:18.615 "data_offset": 0, 00:08:18.615 "data_size": 0 00:08:18.615 }, 00:08:18.615 { 00:08:18.615 "name": "BaseBdev2", 00:08:18.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.615 "is_configured": false, 00:08:18.615 "data_offset": 0, 00:08:18.615 "data_size": 0 00:08:18.615 } 00:08:18.615 ] 00:08:18.615 }' 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.615 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.182 [2024-11-28 18:48:48.539785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.182 [2024-11-28 18:48:48.539822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.182 [2024-11-28 18:48:48.551817] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.182 [2024-11-28 18:48:48.551855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.182 [2024-11-28 
18:48:48.551865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.182 [2024-11-28 18:48:48.551872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.182 [2024-11-28 18:48:48.572714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.182 BaseBdev1 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.182 18:48:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.182 [ 00:08:19.182 { 00:08:19.182 "name": "BaseBdev1", 00:08:19.182 "aliases": [ 00:08:19.182 "a802c07e-543b-4ed1-b3cc-48307836cfe7" 00:08:19.182 ], 00:08:19.182 "product_name": "Malloc disk", 00:08:19.182 "block_size": 512, 00:08:19.182 "num_blocks": 65536, 00:08:19.182 "uuid": "a802c07e-543b-4ed1-b3cc-48307836cfe7", 00:08:19.182 "assigned_rate_limits": { 00:08:19.182 "rw_ios_per_sec": 0, 00:08:19.182 "rw_mbytes_per_sec": 0, 00:08:19.182 "r_mbytes_per_sec": 0, 00:08:19.182 "w_mbytes_per_sec": 0 00:08:19.182 }, 00:08:19.182 "claimed": true, 00:08:19.182 "claim_type": "exclusive_write", 00:08:19.182 "zoned": false, 00:08:19.182 "supported_io_types": { 00:08:19.182 "read": true, 00:08:19.182 "write": true, 00:08:19.182 "unmap": true, 00:08:19.182 "flush": true, 00:08:19.182 "reset": true, 00:08:19.182 "nvme_admin": false, 00:08:19.182 "nvme_io": false, 00:08:19.182 "nvme_io_md": false, 00:08:19.182 "write_zeroes": true, 00:08:19.182 "zcopy": true, 00:08:19.182 "get_zone_info": false, 00:08:19.182 "zone_management": false, 00:08:19.182 "zone_append": false, 00:08:19.182 "compare": false, 00:08:19.182 "compare_and_write": false, 00:08:19.182 "abort": true, 00:08:19.182 "seek_hole": false, 00:08:19.182 "seek_data": false, 00:08:19.182 "copy": true, 00:08:19.182 "nvme_iov_md": false 00:08:19.182 }, 00:08:19.182 "memory_domains": [ 00:08:19.182 { 00:08:19.182 "dma_device_id": "system", 00:08:19.182 "dma_device_type": 1 00:08:19.182 }, 00:08:19.182 { 00:08:19.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.182 "dma_device_type": 
2 00:08:19.182 } 00:08:19.182 ], 00:08:19.182 "driver_specific": {} 00:08:19.182 } 00:08:19.182 ] 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.182 "name": "Existed_Raid", 00:08:19.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.182 "strip_size_kb": 0, 00:08:19.182 "state": "configuring", 00:08:19.182 "raid_level": "raid1", 00:08:19.182 "superblock": false, 00:08:19.182 "num_base_bdevs": 2, 00:08:19.182 "num_base_bdevs_discovered": 1, 00:08:19.182 "num_base_bdevs_operational": 2, 00:08:19.182 "base_bdevs_list": [ 00:08:19.182 { 00:08:19.182 "name": "BaseBdev1", 00:08:19.182 "uuid": "a802c07e-543b-4ed1-b3cc-48307836cfe7", 00:08:19.182 "is_configured": true, 00:08:19.182 "data_offset": 0, 00:08:19.182 "data_size": 65536 00:08:19.182 }, 00:08:19.182 { 00:08:19.182 "name": "BaseBdev2", 00:08:19.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.182 "is_configured": false, 00:08:19.182 "data_offset": 0, 00:08:19.182 "data_size": 0 00:08:19.182 } 00:08:19.182 ] 00:08:19.182 }' 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.182 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.440 [2024-11-28 18:48:49.024863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.440 [2024-11-28 18:48:49.024915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.440 18:48:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.440 [2024-11-28 18:48:49.036895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.440 [2024-11-28 18:48:49.038681] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.440 [2024-11-28 18:48:49.038718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.440 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.699 "name": "Existed_Raid", 00:08:19.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.699 "strip_size_kb": 0, 00:08:19.699 "state": "configuring", 00:08:19.699 "raid_level": "raid1", 00:08:19.699 "superblock": false, 00:08:19.699 "num_base_bdevs": 2, 00:08:19.699 "num_base_bdevs_discovered": 1, 00:08:19.699 "num_base_bdevs_operational": 2, 00:08:19.699 "base_bdevs_list": [ 00:08:19.699 { 00:08:19.699 "name": "BaseBdev1", 00:08:19.699 "uuid": "a802c07e-543b-4ed1-b3cc-48307836cfe7", 00:08:19.699 "is_configured": true, 00:08:19.699 "data_offset": 0, 00:08:19.699 "data_size": 65536 00:08:19.699 }, 00:08:19.699 { 00:08:19.699 "name": "BaseBdev2", 00:08:19.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.699 "is_configured": false, 00:08:19.699 "data_offset": 0, 00:08:19.699 "data_size": 0 00:08:19.699 } 00:08:19.699 ] 00:08:19.699 }' 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.699 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.958 
18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.958 [2024-11-28 18:48:49.515953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.958 [2024-11-28 18:48:49.515999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:19.958 [2024-11-28 18:48:49.516009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:19.958 [2024-11-28 18:48:49.516247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:19.958 [2024-11-28 18:48:49.516390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:19.958 [2024-11-28 18:48:49.516417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:19.958 [2024-11-28 18:48:49.516678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.958 BaseBdev2 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.958 [ 00:08:19.958 { 00:08:19.958 "name": "BaseBdev2", 00:08:19.958 "aliases": [ 00:08:19.958 "1da2c445-9d80-424e-9f23-61d69c8b3de4" 00:08:19.958 ], 00:08:19.958 "product_name": "Malloc disk", 00:08:19.958 "block_size": 512, 00:08:19.958 "num_blocks": 65536, 00:08:19.958 "uuid": "1da2c445-9d80-424e-9f23-61d69c8b3de4", 00:08:19.958 "assigned_rate_limits": { 00:08:19.958 "rw_ios_per_sec": 0, 00:08:19.958 "rw_mbytes_per_sec": 0, 00:08:19.958 "r_mbytes_per_sec": 0, 00:08:19.958 "w_mbytes_per_sec": 0 00:08:19.958 }, 00:08:19.958 "claimed": true, 00:08:19.958 "claim_type": "exclusive_write", 00:08:19.958 "zoned": false, 00:08:19.958 "supported_io_types": { 00:08:19.958 "read": true, 00:08:19.958 "write": true, 00:08:19.958 "unmap": true, 00:08:19.958 "flush": true, 00:08:19.958 "reset": true, 00:08:19.958 "nvme_admin": false, 00:08:19.958 "nvme_io": false, 00:08:19.958 "nvme_io_md": false, 00:08:19.958 "write_zeroes": true, 00:08:19.958 "zcopy": true, 00:08:19.958 "get_zone_info": false, 00:08:19.958 "zone_management": false, 00:08:19.958 "zone_append": false, 00:08:19.958 "compare": false, 00:08:19.958 "compare_and_write": false, 
00:08:19.958 "abort": true, 00:08:19.958 "seek_hole": false, 00:08:19.958 "seek_data": false, 00:08:19.958 "copy": true, 00:08:19.958 "nvme_iov_md": false 00:08:19.958 }, 00:08:19.958 "memory_domains": [ 00:08:19.958 { 00:08:19.958 "dma_device_id": "system", 00:08:19.958 "dma_device_type": 1 00:08:19.958 }, 00:08:19.958 { 00:08:19.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.958 "dma_device_type": 2 00:08:19.958 } 00:08:19.958 ], 00:08:19.958 "driver_specific": {} 00:08:19.958 } 00:08:19.958 ] 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.958 
18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.958 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.216 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.216 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.216 "name": "Existed_Raid", 00:08:20.216 "uuid": "83411c4d-b042-4284-8407-da8ad716a910", 00:08:20.216 "strip_size_kb": 0, 00:08:20.216 "state": "online", 00:08:20.216 "raid_level": "raid1", 00:08:20.216 "superblock": false, 00:08:20.216 "num_base_bdevs": 2, 00:08:20.216 "num_base_bdevs_discovered": 2, 00:08:20.216 "num_base_bdevs_operational": 2, 00:08:20.216 "base_bdevs_list": [ 00:08:20.216 { 00:08:20.216 "name": "BaseBdev1", 00:08:20.216 "uuid": "a802c07e-543b-4ed1-b3cc-48307836cfe7", 00:08:20.216 "is_configured": true, 00:08:20.216 "data_offset": 0, 00:08:20.216 "data_size": 65536 00:08:20.216 }, 00:08:20.216 { 00:08:20.216 "name": "BaseBdev2", 00:08:20.216 "uuid": "1da2c445-9d80-424e-9f23-61d69c8b3de4", 00:08:20.216 "is_configured": true, 00:08:20.216 "data_offset": 0, 00:08:20.216 "data_size": 65536 00:08:20.216 } 00:08:20.216 ] 00:08:20.216 }' 00:08:20.216 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.216 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.476 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:20.476 18:48:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:20.476 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.476 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.476 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.476 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.476 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:20.476 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.476 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.476 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.476 [2024-11-28 18:48:49.992385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.476 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.476 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.476 "name": "Existed_Raid", 00:08:20.476 "aliases": [ 00:08:20.476 "83411c4d-b042-4284-8407-da8ad716a910" 00:08:20.476 ], 00:08:20.476 "product_name": "Raid Volume", 00:08:20.476 "block_size": 512, 00:08:20.476 "num_blocks": 65536, 00:08:20.476 "uuid": "83411c4d-b042-4284-8407-da8ad716a910", 00:08:20.476 "assigned_rate_limits": { 00:08:20.476 "rw_ios_per_sec": 0, 00:08:20.476 "rw_mbytes_per_sec": 0, 00:08:20.476 "r_mbytes_per_sec": 0, 00:08:20.476 "w_mbytes_per_sec": 0 00:08:20.476 }, 00:08:20.476 "claimed": false, 00:08:20.476 "zoned": false, 00:08:20.476 "supported_io_types": { 00:08:20.476 "read": true, 00:08:20.476 "write": true, 00:08:20.476 "unmap": false, 00:08:20.476 
"flush": false, 00:08:20.476 "reset": true, 00:08:20.476 "nvme_admin": false, 00:08:20.476 "nvme_io": false, 00:08:20.476 "nvme_io_md": false, 00:08:20.476 "write_zeroes": true, 00:08:20.476 "zcopy": false, 00:08:20.476 "get_zone_info": false, 00:08:20.476 "zone_management": false, 00:08:20.476 "zone_append": false, 00:08:20.476 "compare": false, 00:08:20.476 "compare_and_write": false, 00:08:20.476 "abort": false, 00:08:20.476 "seek_hole": false, 00:08:20.476 "seek_data": false, 00:08:20.476 "copy": false, 00:08:20.476 "nvme_iov_md": false 00:08:20.476 }, 00:08:20.476 "memory_domains": [ 00:08:20.476 { 00:08:20.476 "dma_device_id": "system", 00:08:20.476 "dma_device_type": 1 00:08:20.476 }, 00:08:20.476 { 00:08:20.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.476 "dma_device_type": 2 00:08:20.476 }, 00:08:20.476 { 00:08:20.476 "dma_device_id": "system", 00:08:20.476 "dma_device_type": 1 00:08:20.476 }, 00:08:20.476 { 00:08:20.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.476 "dma_device_type": 2 00:08:20.476 } 00:08:20.476 ], 00:08:20.476 "driver_specific": { 00:08:20.476 "raid": { 00:08:20.476 "uuid": "83411c4d-b042-4284-8407-da8ad716a910", 00:08:20.476 "strip_size_kb": 0, 00:08:20.476 "state": "online", 00:08:20.476 "raid_level": "raid1", 00:08:20.476 "superblock": false, 00:08:20.476 "num_base_bdevs": 2, 00:08:20.476 "num_base_bdevs_discovered": 2, 00:08:20.476 "num_base_bdevs_operational": 2, 00:08:20.476 "base_bdevs_list": [ 00:08:20.476 { 00:08:20.476 "name": "BaseBdev1", 00:08:20.476 "uuid": "a802c07e-543b-4ed1-b3cc-48307836cfe7", 00:08:20.476 "is_configured": true, 00:08:20.476 "data_offset": 0, 00:08:20.476 "data_size": 65536 00:08:20.476 }, 00:08:20.476 { 00:08:20.476 "name": "BaseBdev2", 00:08:20.476 "uuid": "1da2c445-9d80-424e-9f23-61d69c8b3de4", 00:08:20.476 "is_configured": true, 00:08:20.476 "data_offset": 0, 00:08:20.476 "data_size": 65536 00:08:20.476 } 00:08:20.476 ] 00:08:20.476 } 00:08:20.476 } 00:08:20.476 }' 00:08:20.476 
18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.476 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:20.476 BaseBdev2' 00:08:20.476 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 [2024-11-28 18:48:50.212218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.785 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.786 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.786 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.786 "name": "Existed_Raid", 00:08:20.786 "uuid": "83411c4d-b042-4284-8407-da8ad716a910", 00:08:20.786 "strip_size_kb": 0, 00:08:20.786 "state": "online", 00:08:20.786 "raid_level": "raid1", 00:08:20.786 "superblock": false, 00:08:20.786 "num_base_bdevs": 2, 00:08:20.786 "num_base_bdevs_discovered": 1, 00:08:20.786 "num_base_bdevs_operational": 1, 00:08:20.786 "base_bdevs_list": [ 00:08:20.786 { 00:08:20.786 "name": null, 00:08:20.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.786 "is_configured": false, 00:08:20.786 "data_offset": 0, 00:08:20.786 "data_size": 65536 00:08:20.786 }, 00:08:20.786 { 00:08:20.786 "name": 
"BaseBdev2", 00:08:20.786 "uuid": "1da2c445-9d80-424e-9f23-61d69c8b3de4", 00:08:20.786 "is_configured": true, 00:08:20.786 "data_offset": 0, 00:08:20.786 "data_size": 65536 00:08:20.786 } 00:08:20.786 ] 00:08:20.786 }' 00:08:20.786 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.786 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.370 [2024-11-28 18:48:50.731670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:21.370 [2024-11-28 18:48:50.731773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:08:21.370 [2024-11-28 18:48:50.743363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.370 [2024-11-28 18:48:50.743414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.370 [2024-11-28 18:48:50.743440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75528 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75528 ']' 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75528 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@959 -- # uname 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75528 00:08:21.370 killing process with pid 75528 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75528' 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75528 00:08:21.370 [2024-11-28 18:48:50.821162] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.370 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75528 00:08:21.370 [2024-11-28 18:48:50.822110] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:21.629 00:08:21.629 real 0m3.850s 00:08:21.629 user 0m6.112s 00:08:21.629 sys 0m0.741s 00:08:21.629 ************************************ 00:08:21.629 END TEST raid_state_function_test 00:08:21.629 ************************************ 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.629 18:48:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:21.629 18:48:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.629 18:48:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.629 
18:48:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.629 ************************************ 00:08:21.629 START TEST raid_state_function_test_sb 00:08:21.629 ************************************ 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.629 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local 
raid_bdev_name=Existed_Raid 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75770 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75770' 00:08:21.630 Process raid pid: 75770 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75770 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75770 ']' 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:21.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.630 18:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.630 [2024-11-28 18:48:51.206315] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:21.630 [2024-11-28 18:48:51.206465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.889 [2024-11-28 18:48:51.342507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:21.889 [2024-11-28 18:48:51.378808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.889 [2024-11-28 18:48:51.403615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.889 [2024-11-28 18:48:51.445410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.889 [2024-11-28 18:48:51.445450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.458 [2024-11-28 
18:48:52.020720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.458 [2024-11-28 18:48:52.020771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.458 [2024-11-28 18:48:52.020792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.458 [2024-11-28 18:48:52.020801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.458 "name": "Existed_Raid", 00:08:22.458 "uuid": "3b6158c2-3396-4c47-a05e-b69217e56320", 00:08:22.458 "strip_size_kb": 0, 00:08:22.458 "state": "configuring", 00:08:22.458 "raid_level": "raid1", 00:08:22.458 "superblock": true, 00:08:22.458 "num_base_bdevs": 2, 00:08:22.458 "num_base_bdevs_discovered": 0, 00:08:22.458 "num_base_bdevs_operational": 2, 00:08:22.458 "base_bdevs_list": [ 00:08:22.458 { 00:08:22.458 "name": "BaseBdev1", 00:08:22.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.458 "is_configured": false, 00:08:22.458 "data_offset": 0, 00:08:22.458 "data_size": 0 00:08:22.458 }, 00:08:22.458 { 00:08:22.458 "name": "BaseBdev2", 00:08:22.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.458 "is_configured": false, 00:08:22.458 "data_offset": 0, 00:08:22.458 "data_size": 0 00:08:22.458 } 00:08:22.458 ] 00:08:22.458 }' 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.458 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.027 [2024-11-28 18:48:52.448726] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:08:23.027 [2024-11-28 18:48:52.448768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.027 [2024-11-28 18:48:52.460762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.027 [2024-11-28 18:48:52.460802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.027 [2024-11-28 18:48:52.460812] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.027 [2024-11-28 18:48:52.460835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.027 [2024-11-28 18:48:52.481528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.027 BaseBdev1 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.027 18:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:23.027 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.028 [ 00:08:23.028 { 00:08:23.028 "name": "BaseBdev1", 00:08:23.028 "aliases": [ 00:08:23.028 "b14143d9-5a8e-464d-802f-8c1e20d80e49" 00:08:23.028 ], 00:08:23.028 "product_name": "Malloc disk", 00:08:23.028 "block_size": 512, 00:08:23.028 "num_blocks": 65536, 00:08:23.028 "uuid": "b14143d9-5a8e-464d-802f-8c1e20d80e49", 00:08:23.028 "assigned_rate_limits": { 00:08:23.028 "rw_ios_per_sec": 0, 00:08:23.028 "rw_mbytes_per_sec": 0, 00:08:23.028 "r_mbytes_per_sec": 0, 00:08:23.028 "w_mbytes_per_sec": 0 
00:08:23.028 }, 00:08:23.028 "claimed": true, 00:08:23.028 "claim_type": "exclusive_write", 00:08:23.028 "zoned": false, 00:08:23.028 "supported_io_types": { 00:08:23.028 "read": true, 00:08:23.028 "write": true, 00:08:23.028 "unmap": true, 00:08:23.028 "flush": true, 00:08:23.028 "reset": true, 00:08:23.028 "nvme_admin": false, 00:08:23.028 "nvme_io": false, 00:08:23.028 "nvme_io_md": false, 00:08:23.028 "write_zeroes": true, 00:08:23.028 "zcopy": true, 00:08:23.028 "get_zone_info": false, 00:08:23.028 "zone_management": false, 00:08:23.028 "zone_append": false, 00:08:23.028 "compare": false, 00:08:23.028 "compare_and_write": false, 00:08:23.028 "abort": true, 00:08:23.028 "seek_hole": false, 00:08:23.028 "seek_data": false, 00:08:23.028 "copy": true, 00:08:23.028 "nvme_iov_md": false 00:08:23.028 }, 00:08:23.028 "memory_domains": [ 00:08:23.028 { 00:08:23.028 "dma_device_id": "system", 00:08:23.028 "dma_device_type": 1 00:08:23.028 }, 00:08:23.028 { 00:08:23.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.028 "dma_device_type": 2 00:08:23.028 } 00:08:23.028 ], 00:08:23.028 "driver_specific": {} 00:08:23.028 } 00:08:23.028 ] 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.028 
18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.028 "name": "Existed_Raid", 00:08:23.028 "uuid": "34fb26d9-3b4b-4b6e-bb1b-a2160cb0899b", 00:08:23.028 "strip_size_kb": 0, 00:08:23.028 "state": "configuring", 00:08:23.028 "raid_level": "raid1", 00:08:23.028 "superblock": true, 00:08:23.028 "num_base_bdevs": 2, 00:08:23.028 "num_base_bdevs_discovered": 1, 00:08:23.028 "num_base_bdevs_operational": 2, 00:08:23.028 "base_bdevs_list": [ 00:08:23.028 { 00:08:23.028 "name": "BaseBdev1", 00:08:23.028 "uuid": "b14143d9-5a8e-464d-802f-8c1e20d80e49", 00:08:23.028 "is_configured": true, 00:08:23.028 "data_offset": 2048, 00:08:23.028 "data_size": 63488 00:08:23.028 }, 00:08:23.028 { 00:08:23.028 "name": "BaseBdev2", 00:08:23.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.028 
"is_configured": false, 00:08:23.028 "data_offset": 0, 00:08:23.028 "data_size": 0 00:08:23.028 } 00:08:23.028 ] 00:08:23.028 }' 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.028 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.596 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.596 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.596 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.596 [2024-11-28 18:48:52.933666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.596 [2024-11-28 18:48:52.933793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:23.596 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.596 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.597 [2024-11-28 18:48:52.945702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.597 [2024-11-28 18:48:52.947587] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.597 [2024-11-28 18:48:52.947664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.597 18:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.597 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.597 18:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.597 "name": "Existed_Raid", 00:08:23.597 "uuid": "aa228c68-711b-4209-8ffb-7f88a80dbfbf", 00:08:23.597 "strip_size_kb": 0, 00:08:23.597 "state": "configuring", 00:08:23.597 "raid_level": "raid1", 00:08:23.597 "superblock": true, 00:08:23.597 "num_base_bdevs": 2, 00:08:23.597 "num_base_bdevs_discovered": 1, 00:08:23.597 "num_base_bdevs_operational": 2, 00:08:23.597 "base_bdevs_list": [ 00:08:23.597 { 00:08:23.597 "name": "BaseBdev1", 00:08:23.597 "uuid": "b14143d9-5a8e-464d-802f-8c1e20d80e49", 00:08:23.597 "is_configured": true, 00:08:23.597 "data_offset": 2048, 00:08:23.597 "data_size": 63488 00:08:23.597 }, 00:08:23.597 { 00:08:23.597 "name": "BaseBdev2", 00:08:23.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.597 "is_configured": false, 00:08:23.597 "data_offset": 0, 00:08:23.597 "data_size": 0 00:08:23.597 } 00:08:23.597 ] 00:08:23.597 }' 00:08:23.597 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.597 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.857 [2024-11-28 18:48:53.396694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.857 [2024-11-28 18:48:53.396960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:23.857 [2024-11-28 18:48:53.397021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.857 BaseBdev2 00:08:23.857 [2024-11-28 18:48:53.397290] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:23.857 [2024-11-28 18:48:53.397449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:23.857 [2024-11-28 18:48:53.397461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:23.857 [2024-11-28 18:48:53.397596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.857 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.857 
18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.857 [ 00:08:23.857 { 00:08:23.857 "name": "BaseBdev2", 00:08:23.857 "aliases": [ 00:08:23.857 "be471fb6-9036-4d1a-9609-d1be252e1a73" 00:08:23.857 ], 00:08:23.857 "product_name": "Malloc disk", 00:08:23.857 "block_size": 512, 00:08:23.857 "num_blocks": 65536, 00:08:23.857 "uuid": "be471fb6-9036-4d1a-9609-d1be252e1a73", 00:08:23.857 "assigned_rate_limits": { 00:08:23.857 "rw_ios_per_sec": 0, 00:08:23.857 "rw_mbytes_per_sec": 0, 00:08:23.857 "r_mbytes_per_sec": 0, 00:08:23.857 "w_mbytes_per_sec": 0 00:08:23.857 }, 00:08:23.857 "claimed": true, 00:08:23.857 "claim_type": "exclusive_write", 00:08:23.857 "zoned": false, 00:08:23.857 "supported_io_types": { 00:08:23.857 "read": true, 00:08:23.857 "write": true, 00:08:23.857 "unmap": true, 00:08:23.857 "flush": true, 00:08:23.857 "reset": true, 00:08:23.857 "nvme_admin": false, 00:08:23.857 "nvme_io": false, 00:08:23.857 "nvme_io_md": false, 00:08:23.857 "write_zeroes": true, 00:08:23.857 "zcopy": true, 00:08:23.857 "get_zone_info": false, 00:08:23.857 "zone_management": false, 00:08:23.858 "zone_append": false, 00:08:23.858 "compare": false, 00:08:23.858 "compare_and_write": false, 00:08:23.858 "abort": true, 00:08:23.858 "seek_hole": false, 00:08:23.858 "seek_data": false, 00:08:23.858 "copy": true, 00:08:23.858 "nvme_iov_md": false 00:08:23.858 }, 00:08:23.858 "memory_domains": [ 00:08:23.858 { 00:08:23.858 "dma_device_id": "system", 00:08:23.858 "dma_device_type": 1 00:08:23.858 }, 00:08:23.858 { 00:08:23.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.858 "dma_device_type": 2 00:08:23.858 } 00:08:23.858 ], 00:08:23.858 "driver_specific": {} 00:08:23.858 } 00:08:23.858 ] 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.858 18:48:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.858 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.117 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.117 18:48:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.117 "name": "Existed_Raid", 00:08:24.117 "uuid": "aa228c68-711b-4209-8ffb-7f88a80dbfbf", 00:08:24.117 "strip_size_kb": 0, 00:08:24.117 "state": "online", 00:08:24.117 "raid_level": "raid1", 00:08:24.117 "superblock": true, 00:08:24.117 "num_base_bdevs": 2, 00:08:24.117 "num_base_bdevs_discovered": 2, 00:08:24.117 "num_base_bdevs_operational": 2, 00:08:24.117 "base_bdevs_list": [ 00:08:24.117 { 00:08:24.117 "name": "BaseBdev1", 00:08:24.117 "uuid": "b14143d9-5a8e-464d-802f-8c1e20d80e49", 00:08:24.117 "is_configured": true, 00:08:24.117 "data_offset": 2048, 00:08:24.117 "data_size": 63488 00:08:24.117 }, 00:08:24.117 { 00:08:24.117 "name": "BaseBdev2", 00:08:24.117 "uuid": "be471fb6-9036-4d1a-9609-d1be252e1a73", 00:08:24.117 "is_configured": true, 00:08:24.117 "data_offset": 2048, 00:08:24.117 "data_size": 63488 00:08:24.117 } 00:08:24.117 ] 00:08:24.117 }' 00:08:24.117 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.117 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.376 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.376 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.376 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.376 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.376 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.376 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.376 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:24.376 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.376 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.377 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.377 [2024-11-28 18:48:53.901151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.377 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.377 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.377 "name": "Existed_Raid", 00:08:24.377 "aliases": [ 00:08:24.377 "aa228c68-711b-4209-8ffb-7f88a80dbfbf" 00:08:24.377 ], 00:08:24.377 "product_name": "Raid Volume", 00:08:24.377 "block_size": 512, 00:08:24.377 "num_blocks": 63488, 00:08:24.377 "uuid": "aa228c68-711b-4209-8ffb-7f88a80dbfbf", 00:08:24.377 "assigned_rate_limits": { 00:08:24.377 "rw_ios_per_sec": 0, 00:08:24.377 "rw_mbytes_per_sec": 0, 00:08:24.377 "r_mbytes_per_sec": 0, 00:08:24.377 "w_mbytes_per_sec": 0 00:08:24.377 }, 00:08:24.377 "claimed": false, 00:08:24.377 "zoned": false, 00:08:24.377 "supported_io_types": { 00:08:24.377 "read": true, 00:08:24.377 "write": true, 00:08:24.377 "unmap": false, 00:08:24.377 "flush": false, 00:08:24.377 "reset": true, 00:08:24.377 "nvme_admin": false, 00:08:24.377 "nvme_io": false, 00:08:24.377 "nvme_io_md": false, 00:08:24.377 "write_zeroes": true, 00:08:24.377 "zcopy": false, 00:08:24.377 "get_zone_info": false, 00:08:24.377 "zone_management": false, 00:08:24.377 "zone_append": false, 00:08:24.377 "compare": false, 00:08:24.377 "compare_and_write": false, 00:08:24.377 "abort": false, 00:08:24.377 "seek_hole": false, 00:08:24.377 "seek_data": false, 00:08:24.377 "copy": false, 00:08:24.377 "nvme_iov_md": false 00:08:24.377 }, 00:08:24.377 "memory_domains": [ 00:08:24.377 { 00:08:24.377 
"dma_device_id": "system", 00:08:24.377 "dma_device_type": 1 00:08:24.377 }, 00:08:24.377 { 00:08:24.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.377 "dma_device_type": 2 00:08:24.377 }, 00:08:24.377 { 00:08:24.377 "dma_device_id": "system", 00:08:24.377 "dma_device_type": 1 00:08:24.377 }, 00:08:24.377 { 00:08:24.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.377 "dma_device_type": 2 00:08:24.377 } 00:08:24.377 ], 00:08:24.377 "driver_specific": { 00:08:24.377 "raid": { 00:08:24.377 "uuid": "aa228c68-711b-4209-8ffb-7f88a80dbfbf", 00:08:24.377 "strip_size_kb": 0, 00:08:24.377 "state": "online", 00:08:24.377 "raid_level": "raid1", 00:08:24.377 "superblock": true, 00:08:24.377 "num_base_bdevs": 2, 00:08:24.377 "num_base_bdevs_discovered": 2, 00:08:24.377 "num_base_bdevs_operational": 2, 00:08:24.377 "base_bdevs_list": [ 00:08:24.377 { 00:08:24.377 "name": "BaseBdev1", 00:08:24.377 "uuid": "b14143d9-5a8e-464d-802f-8c1e20d80e49", 00:08:24.377 "is_configured": true, 00:08:24.377 "data_offset": 2048, 00:08:24.377 "data_size": 63488 00:08:24.377 }, 00:08:24.377 { 00:08:24.377 "name": "BaseBdev2", 00:08:24.377 "uuid": "be471fb6-9036-4d1a-9609-d1be252e1a73", 00:08:24.377 "is_configured": true, 00:08:24.377 "data_offset": 2048, 00:08:24.377 "data_size": 63488 00:08:24.377 } 00:08:24.377 ] 00:08:24.377 } 00:08:24.377 } 00:08:24.377 }' 00:08:24.377 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.377 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:24.377 BaseBdev2' 00:08:24.377 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.636 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.636 18:48:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.636 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.636 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:24.636 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.636 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.636 [2024-11-28 18:48:54.096985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.636 18:48:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.636 "name": "Existed_Raid", 00:08:24.636 "uuid": "aa228c68-711b-4209-8ffb-7f88a80dbfbf", 00:08:24.636 "strip_size_kb": 0, 00:08:24.636 "state": "online", 00:08:24.636 "raid_level": "raid1", 00:08:24.636 "superblock": true, 00:08:24.636 "num_base_bdevs": 2, 00:08:24.636 "num_base_bdevs_discovered": 1, 00:08:24.636 "num_base_bdevs_operational": 1, 00:08:24.636 "base_bdevs_list": [ 00:08:24.636 { 00:08:24.636 "name": null, 00:08:24.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.636 "is_configured": false, 00:08:24.636 "data_offset": 0, 00:08:24.636 "data_size": 63488 00:08:24.636 }, 00:08:24.636 { 00:08:24.636 "name": "BaseBdev2", 00:08:24.636 "uuid": "be471fb6-9036-4d1a-9609-d1be252e1a73", 00:08:24.636 "is_configured": true, 00:08:24.636 "data_offset": 2048, 00:08:24.636 "data_size": 63488 00:08:24.636 } 00:08:24.636 ] 00:08:24.636 }' 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.636 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.206 18:48:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.206 [2024-11-28 18:48:54.636538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.206 [2024-11-28 18:48:54.636637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.206 [2024-11-28 18:48:54.648125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.206 [2024-11-28 18:48:54.648182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.206 [2024-11-28 18:48:54.648197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, 
state offline 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75770 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75770 ']' 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75770 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75770 00:08:25.206 killing process with pid 75770 00:08:25.206 18:48:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75770' 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75770 00:08:25.206 [2024-11-28 18:48:54.747728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.206 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75770 00:08:25.206 [2024-11-28 18:48:54.748705] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.465 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:25.465 00:08:25.465 real 0m3.850s 00:08:25.465 user 0m6.055s 00:08:25.465 sys 0m0.792s 00:08:25.465 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.465 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.465 ************************************ 00:08:25.465 END TEST raid_state_function_test_sb 00:08:25.465 ************************************ 00:08:25.465 18:48:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:25.465 18:48:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:25.465 18:48:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.465 18:48:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.465 ************************************ 00:08:25.465 START TEST raid_superblock_test 00:08:25.465 ************************************ 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:25.465 
18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76005 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76005 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 
-- # '[' -z 76005 ']' 00:08:25.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.465 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.725 [2024-11-28 18:48:55.120893] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:25.725 [2024-11-28 18:48:55.121003] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76005 ] 00:08:25.725 [2024-11-28 18:48:55.254990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:25.725 [2024-11-28 18:48:55.292950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.725 [2024-11-28 18:48:55.317588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.984 [2024-11-28 18:48:55.359348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.984 [2024-11-28 18:48:55.359482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.555 malloc1 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.555 [2024-11-28 18:48:55.955269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:26.555 [2024-11-28 18:48:55.955374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.555 [2024-11-28 18:48:55.955415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:26.555 [2024-11-28 18:48:55.955463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.555 [2024-11-28 18:48:55.957576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.555 [2024-11-28 18:48:55.957647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:26.555 pt1 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.555 malloc2 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.555 [2024-11-28 18:48:55.987646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.555 [2024-11-28 18:48:55.987747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.555 [2024-11-28 18:48:55.987782] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:26.555 [2024-11-28 18:48:55.987809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.555 [2024-11-28 18:48:55.989833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.555 [2024-11-28 18:48:55.989898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.555 pt2 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.555 18:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.555 [2024-11-28 18:48:55.999674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:26.555 [2024-11-28 18:48:56.001472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.555 [2024-11-28 18:48:56.001618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:26.555 [2024-11-28 18:48:56.001636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:26.555 [2024-11-28 18:48:56.001899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:26.555 [2024-11-28 18:48:56.002043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:26.555 [2024-11-28 18:48:56.002055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:26.555 [2024-11-28 18:48:56.002166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.555 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.556 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.556 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.556 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.556 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.556 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.556 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.556 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.556 "name": "raid_bdev1", 00:08:26.556 "uuid": "d91bad46-6b7b-4038-942d-cb1e2c3de123", 00:08:26.556 "strip_size_kb": 0, 00:08:26.556 "state": "online", 00:08:26.556 "raid_level": "raid1", 00:08:26.556 "superblock": true, 00:08:26.556 "num_base_bdevs": 2, 00:08:26.556 "num_base_bdevs_discovered": 2, 00:08:26.556 "num_base_bdevs_operational": 2, 00:08:26.556 "base_bdevs_list": [ 00:08:26.556 { 00:08:26.556 "name": "pt1", 00:08:26.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.556 "is_configured": true, 00:08:26.556 "data_offset": 2048, 00:08:26.556 "data_size": 63488 00:08:26.556 }, 00:08:26.556 { 00:08:26.556 "name": "pt2", 00:08:26.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.556 "is_configured": true, 00:08:26.556 
"data_offset": 2048, 00:08:26.556 "data_size": 63488 00:08:26.556 } 00:08:26.556 ] 00:08:26.556 }' 00:08:26.556 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.556 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.126 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:27.126 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:27.126 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.126 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.127 [2024-11-28 18:48:56.448102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.127 "name": "raid_bdev1", 00:08:27.127 "aliases": [ 00:08:27.127 "d91bad46-6b7b-4038-942d-cb1e2c3de123" 00:08:27.127 ], 00:08:27.127 "product_name": "Raid Volume", 00:08:27.127 "block_size": 512, 00:08:27.127 "num_blocks": 63488, 00:08:27.127 "uuid": "d91bad46-6b7b-4038-942d-cb1e2c3de123", 
00:08:27.127 "assigned_rate_limits": { 00:08:27.127 "rw_ios_per_sec": 0, 00:08:27.127 "rw_mbytes_per_sec": 0, 00:08:27.127 "r_mbytes_per_sec": 0, 00:08:27.127 "w_mbytes_per_sec": 0 00:08:27.127 }, 00:08:27.127 "claimed": false, 00:08:27.127 "zoned": false, 00:08:27.127 "supported_io_types": { 00:08:27.127 "read": true, 00:08:27.127 "write": true, 00:08:27.127 "unmap": false, 00:08:27.127 "flush": false, 00:08:27.127 "reset": true, 00:08:27.127 "nvme_admin": false, 00:08:27.127 "nvme_io": false, 00:08:27.127 "nvme_io_md": false, 00:08:27.127 "write_zeroes": true, 00:08:27.127 "zcopy": false, 00:08:27.127 "get_zone_info": false, 00:08:27.127 "zone_management": false, 00:08:27.127 "zone_append": false, 00:08:27.127 "compare": false, 00:08:27.127 "compare_and_write": false, 00:08:27.127 "abort": false, 00:08:27.127 "seek_hole": false, 00:08:27.127 "seek_data": false, 00:08:27.127 "copy": false, 00:08:27.127 "nvme_iov_md": false 00:08:27.127 }, 00:08:27.127 "memory_domains": [ 00:08:27.127 { 00:08:27.127 "dma_device_id": "system", 00:08:27.127 "dma_device_type": 1 00:08:27.127 }, 00:08:27.127 { 00:08:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.127 "dma_device_type": 2 00:08:27.127 }, 00:08:27.127 { 00:08:27.127 "dma_device_id": "system", 00:08:27.127 "dma_device_type": 1 00:08:27.127 }, 00:08:27.127 { 00:08:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.127 "dma_device_type": 2 00:08:27.127 } 00:08:27.127 ], 00:08:27.127 "driver_specific": { 00:08:27.127 "raid": { 00:08:27.127 "uuid": "d91bad46-6b7b-4038-942d-cb1e2c3de123", 00:08:27.127 "strip_size_kb": 0, 00:08:27.127 "state": "online", 00:08:27.127 "raid_level": "raid1", 00:08:27.127 "superblock": true, 00:08:27.127 "num_base_bdevs": 2, 00:08:27.127 "num_base_bdevs_discovered": 2, 00:08:27.127 "num_base_bdevs_operational": 2, 00:08:27.127 "base_bdevs_list": [ 00:08:27.127 { 00:08:27.127 "name": "pt1", 00:08:27.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.127 "is_configured": 
true, 00:08:27.127 "data_offset": 2048, 00:08:27.127 "data_size": 63488 00:08:27.127 }, 00:08:27.127 { 00:08:27.127 "name": "pt2", 00:08:27.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.127 "is_configured": true, 00:08:27.127 "data_offset": 2048, 00:08:27.127 "data_size": 63488 00:08:27.127 } 00:08:27.127 ] 00:08:27.127 } 00:08:27.127 } 00:08:27.127 }' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:27.127 pt2' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:27.127 [2024-11-28 18:48:56.688046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d91bad46-6b7b-4038-942d-cb1e2c3de123 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d91bad46-6b7b-4038-942d-cb1e2c3de123 ']' 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.127 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.388 [2024-11-28 
18:48:56.731843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.388 [2024-11-28 18:48:56.731866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.388 [2024-11-28 18:48:56.731933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.388 [2024-11-28 18:48:56.731996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.388 [2024-11-28 18:48:56.732009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.388 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:27.389 
18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.389 [2024-11-28 18:48:56.875890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:27.389 [2024-11-28 18:48:56.877764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:27.389 [2024-11-28 18:48:56.877824] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:27.389 [2024-11-28 18:48:56.877867] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:27.389 [2024-11-28 18:48:56.877881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.389 [2024-11-28 18:48:56.877890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:27.389 request: 00:08:27.389 { 00:08:27.389 "name": "raid_bdev1", 00:08:27.389 "raid_level": "raid1", 00:08:27.389 "base_bdevs": [ 00:08:27.389 "malloc1", 00:08:27.389 "malloc2" 00:08:27.389 ], 00:08:27.389 "superblock": false, 00:08:27.389 "method": "bdev_raid_create", 00:08:27.389 "req_id": 1 00:08:27.389 } 00:08:27.389 Got JSON-RPC error response 00:08:27.389 response: 00:08:27.389 { 00:08:27.389 "code": -17, 00:08:27.389 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:27.389 } 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:27.389 18:48:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.389 [2024-11-28 18:48:56.923888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.389 [2024-11-28 18:48:56.923975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.389 [2024-11-28 18:48:56.924025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:27.389 [2024-11-28 18:48:56.924057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.389 [2024-11-28 18:48:56.926145] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.389 [2024-11-28 18:48:56.926214] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.389 [2024-11-28 18:48:56.926313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:27.389 [2024-11-28 18:48:56.926367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.389 pt1 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.389 "name": "raid_bdev1", 00:08:27.389 "uuid": "d91bad46-6b7b-4038-942d-cb1e2c3de123", 00:08:27.389 "strip_size_kb": 0, 00:08:27.389 "state": "configuring", 00:08:27.389 "raid_level": "raid1", 00:08:27.389 "superblock": true, 00:08:27.389 "num_base_bdevs": 2, 00:08:27.389 "num_base_bdevs_discovered": 1, 00:08:27.389 "num_base_bdevs_operational": 2, 00:08:27.389 "base_bdevs_list": [ 00:08:27.389 { 00:08:27.389 "name": "pt1", 00:08:27.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.389 "is_configured": true, 00:08:27.389 "data_offset": 2048, 00:08:27.389 "data_size": 63488 00:08:27.389 }, 00:08:27.389 { 00:08:27.389 "name": null, 00:08:27.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.389 "is_configured": false, 00:08:27.389 "data_offset": 2048, 00:08:27.389 "data_size": 63488 00:08:27.389 } 00:08:27.389 ] 00:08:27.389 }' 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.389 18:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.959 [2024-11-28 18:48:57.380025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:27.959 [2024-11-28 18:48:57.380091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.959 [2024-11-28 18:48:57.380113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:27.959 [2024-11-28 18:48:57.380123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.959 [2024-11-28 18:48:57.380518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.959 [2024-11-28 18:48:57.380539] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:27.959 [2024-11-28 18:48:57.380606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:27.959 [2024-11-28 18:48:57.380629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:27.959 [2024-11-28 18:48:57.380726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:27.959 [2024-11-28 18:48:57.380743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:27.959 [2024-11-28 18:48:57.380982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:27.959 [2024-11-28 18:48:57.381107] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:27.959 [2024-11-28 18:48:57.381115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:27.959 [2024-11-28 18:48:57.381219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.959 pt2 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.959 18:48:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.959 "name": "raid_bdev1", 00:08:27.959 "uuid": 
"d91bad46-6b7b-4038-942d-cb1e2c3de123", 00:08:27.959 "strip_size_kb": 0, 00:08:27.959 "state": "online", 00:08:27.959 "raid_level": "raid1", 00:08:27.959 "superblock": true, 00:08:27.959 "num_base_bdevs": 2, 00:08:27.959 "num_base_bdevs_discovered": 2, 00:08:27.959 "num_base_bdevs_operational": 2, 00:08:27.959 "base_bdevs_list": [ 00:08:27.959 { 00:08:27.959 "name": "pt1", 00:08:27.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.959 "is_configured": true, 00:08:27.959 "data_offset": 2048, 00:08:27.959 "data_size": 63488 00:08:27.959 }, 00:08:27.959 { 00:08:27.959 "name": "pt2", 00:08:27.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.959 "is_configured": true, 00:08:27.959 "data_offset": 2048, 00:08:27.959 "data_size": 63488 00:08:27.959 } 00:08:27.959 ] 00:08:27.959 }' 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.959 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.218 
18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.218 [2024-11-28 18:48:57.776360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.218 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.218 "name": "raid_bdev1", 00:08:28.218 "aliases": [ 00:08:28.218 "d91bad46-6b7b-4038-942d-cb1e2c3de123" 00:08:28.218 ], 00:08:28.218 "product_name": "Raid Volume", 00:08:28.218 "block_size": 512, 00:08:28.218 "num_blocks": 63488, 00:08:28.218 "uuid": "d91bad46-6b7b-4038-942d-cb1e2c3de123", 00:08:28.218 "assigned_rate_limits": { 00:08:28.218 "rw_ios_per_sec": 0, 00:08:28.218 "rw_mbytes_per_sec": 0, 00:08:28.218 "r_mbytes_per_sec": 0, 00:08:28.218 "w_mbytes_per_sec": 0 00:08:28.218 }, 00:08:28.218 "claimed": false, 00:08:28.218 "zoned": false, 00:08:28.218 "supported_io_types": { 00:08:28.218 "read": true, 00:08:28.218 "write": true, 00:08:28.218 "unmap": false, 00:08:28.218 "flush": false, 00:08:28.218 "reset": true, 00:08:28.218 "nvme_admin": false, 00:08:28.218 "nvme_io": false, 00:08:28.218 "nvme_io_md": false, 00:08:28.218 "write_zeroes": true, 00:08:28.218 "zcopy": false, 00:08:28.218 "get_zone_info": false, 00:08:28.218 "zone_management": false, 00:08:28.218 "zone_append": false, 00:08:28.218 "compare": false, 00:08:28.218 "compare_and_write": false, 00:08:28.218 "abort": false, 00:08:28.218 "seek_hole": false, 00:08:28.218 "seek_data": false, 00:08:28.218 "copy": false, 00:08:28.218 "nvme_iov_md": false 00:08:28.218 }, 00:08:28.218 "memory_domains": [ 00:08:28.218 { 00:08:28.218 "dma_device_id": "system", 00:08:28.218 "dma_device_type": 1 00:08:28.218 }, 00:08:28.218 { 00:08:28.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.218 "dma_device_type": 2 00:08:28.218 }, 00:08:28.218 { 00:08:28.218 "dma_device_id": "system", 00:08:28.218 
"dma_device_type": 1 00:08:28.218 }, 00:08:28.218 { 00:08:28.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.218 "dma_device_type": 2 00:08:28.218 } 00:08:28.218 ], 00:08:28.218 "driver_specific": { 00:08:28.218 "raid": { 00:08:28.218 "uuid": "d91bad46-6b7b-4038-942d-cb1e2c3de123", 00:08:28.218 "strip_size_kb": 0, 00:08:28.218 "state": "online", 00:08:28.218 "raid_level": "raid1", 00:08:28.218 "superblock": true, 00:08:28.218 "num_base_bdevs": 2, 00:08:28.218 "num_base_bdevs_discovered": 2, 00:08:28.218 "num_base_bdevs_operational": 2, 00:08:28.218 "base_bdevs_list": [ 00:08:28.218 { 00:08:28.218 "name": "pt1", 00:08:28.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.218 "is_configured": true, 00:08:28.218 "data_offset": 2048, 00:08:28.218 "data_size": 63488 00:08:28.218 }, 00:08:28.218 { 00:08:28.218 "name": "pt2", 00:08:28.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.219 "is_configured": true, 00:08:28.219 "data_offset": 2048, 00:08:28.219 "data_size": 63488 00:08:28.219 } 00:08:28.219 ] 00:08:28.219 } 00:08:28.219 } 00:08:28.219 }' 00:08:28.219 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:28.479 pt2' 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.479 18:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.479 [2024-11-28 18:48:58.004448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d91bad46-6b7b-4038-942d-cb1e2c3de123 '!=' d91bad46-6b7b-4038-942d-cb1e2c3de123 ']' 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.479 [2024-11-28 18:48:58.048233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.479 
18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.479 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.739 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.739 "name": "raid_bdev1", 00:08:28.739 "uuid": "d91bad46-6b7b-4038-942d-cb1e2c3de123", 00:08:28.739 "strip_size_kb": 0, 00:08:28.739 "state": "online", 00:08:28.739 "raid_level": "raid1", 00:08:28.739 "superblock": true, 00:08:28.739 "num_base_bdevs": 2, 00:08:28.739 "num_base_bdevs_discovered": 1, 00:08:28.739 "num_base_bdevs_operational": 1, 00:08:28.739 "base_bdevs_list": [ 00:08:28.739 { 00:08:28.739 "name": null, 00:08:28.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.739 "is_configured": false, 00:08:28.739 "data_offset": 0, 00:08:28.739 "data_size": 63488 00:08:28.739 }, 00:08:28.739 { 00:08:28.739 "name": "pt2", 00:08:28.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.739 "is_configured": true, 00:08:28.739 "data_offset": 2048, 00:08:28.739 "data_size": 63488 00:08:28.739 } 00:08:28.739 ] 00:08:28.739 }' 00:08:28.739 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.739 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.998 [2024-11-28 18:48:58.512338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.998 [2024-11-28 18:48:58.512412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.998 [2024-11-28 18:48:58.512527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.998 [2024-11-28 18:48:58.512611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.998 [2024-11-28 18:48:58.512646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 
-- # rpc_cmd bdev_passthru_delete pt2 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.998 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.998 [2024-11-28 18:48:58.564327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:28.998 [2024-11-28 18:48:58.564383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.998 [2024-11-28 18:48:58.564400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:28.999 [2024-11-28 18:48:58.564410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.999 [2024-11-28 18:48:58.566591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.999 [2024-11-28 18:48:58.566629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:28.999 [2024-11-28 18:48:58.566708] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:28.999 [2024-11-28 18:48:58.566740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:28.999 [2024-11-28 18:48:58.566822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.999 [2024-11-28 18:48:58.566837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:28.999 [2024-11-28 18:48:58.567056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:28.999 [2024-11-28 18:48:58.567210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.999 [2024-11-28 18:48:58.567220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:28.999 [2024-11-28 18:48:58.567324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.999 pt2 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.999 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.258 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.258 "name": "raid_bdev1", 00:08:29.258 "uuid": "d91bad46-6b7b-4038-942d-cb1e2c3de123", 00:08:29.258 "strip_size_kb": 0, 00:08:29.258 "state": "online", 00:08:29.258 "raid_level": "raid1", 00:08:29.259 "superblock": true, 00:08:29.259 "num_base_bdevs": 2, 00:08:29.259 "num_base_bdevs_discovered": 1, 00:08:29.259 "num_base_bdevs_operational": 1, 00:08:29.259 "base_bdevs_list": [ 00:08:29.259 { 00:08:29.259 "name": null, 00:08:29.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.259 "is_configured": false, 00:08:29.259 "data_offset": 2048, 00:08:29.259 "data_size": 63488 00:08:29.259 }, 00:08:29.259 { 00:08:29.259 "name": "pt2", 00:08:29.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.259 "is_configured": true, 00:08:29.259 "data_offset": 2048, 00:08:29.259 "data_size": 63488 00:08:29.259 } 00:08:29.259 ] 00:08:29.259 }' 00:08:29.259 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.259 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.519 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.519 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.519 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.519 [2024-11-28 18:48:58.992461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.519 [2024-11-28 18:48:58.992533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.519 [2024-11-28 18:48:58.992613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.519 [2024-11-28 18:48:58.992672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.519 [2024-11-28 18:48:58.992705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:29.519 18:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.519 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.519 18:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.519 [2024-11-28 18:48:59.052461] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:29.519 [2024-11-28 18:48:59.052547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.519 [2024-11-28 18:48:59.052584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:29.519 [2024-11-28 18:48:59.052609] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.519 [2024-11-28 18:48:59.054763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.519 [2024-11-28 18:48:59.054829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:29.519 [2024-11-28 18:48:59.054919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:29.519 [2024-11-28 18:48:59.054980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:29.519 [2024-11-28 18:48:59.055113] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:29.519 [2024-11-28 18:48:59.055191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.519 [2024-11-28 18:48:59.055255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:08:29.519 [2024-11-28 18:48:59.055318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.519 [2024-11-28 18:48:59.055419] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:29.519 [2024-11-28 18:48:59.055469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:08:29.519 [2024-11-28 18:48:59.055715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:29.519 [2024-11-28 18:48:59.055875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:29.519 [2024-11-28 18:48:59.055921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:29.519 [2024-11-28 18:48:59.056074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.519 pt1 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.519 18:48:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.519 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.519 "name": "raid_bdev1", 00:08:29.519 "uuid": "d91bad46-6b7b-4038-942d-cb1e2c3de123", 00:08:29.519 "strip_size_kb": 0, 00:08:29.519 "state": "online", 00:08:29.519 "raid_level": "raid1", 00:08:29.519 "superblock": true, 00:08:29.519 "num_base_bdevs": 2, 00:08:29.519 "num_base_bdevs_discovered": 1, 00:08:29.519 "num_base_bdevs_operational": 1, 00:08:29.519 "base_bdevs_list": [ 00:08:29.519 { 00:08:29.519 "name": null, 00:08:29.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.519 "is_configured": false, 00:08:29.519 "data_offset": 2048, 00:08:29.519 "data_size": 63488 00:08:29.519 }, 00:08:29.519 { 00:08:29.520 "name": "pt2", 00:08:29.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.520 "is_configured": true, 00:08:29.520 "data_offset": 2048, 00:08:29.520 "data_size": 63488 00:08:29.520 } 00:08:29.520 ] 00:08:29.520 }' 00:08:29.520 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.520 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.091 [2024-11-28 18:48:59.492816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d91bad46-6b7b-4038-942d-cb1e2c3de123 '!=' d91bad46-6b7b-4038-942d-cb1e2c3de123 ']' 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76005 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76005 ']' 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76005 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76005 00:08:30.091 killing process with pid 76005 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76005' 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76005 00:08:30.091 [2024-11-28 18:48:59.556862] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.091 [2024-11-28 18:48:59.556939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.091 [2024-11-28 18:48:59.556985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.091 [2024-11-28 18:48:59.556996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:30.091 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76005 00:08:30.091 [2024-11-28 18:48:59.579266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.351 18:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:30.351 00:08:30.351 real 0m4.763s 00:08:30.351 user 0m7.811s 00:08:30.351 sys 0m0.906s 00:08:30.351 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.351 18:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.351 ************************************ 00:08:30.351 END TEST raid_superblock_test 00:08:30.351 ************************************ 00:08:30.351 18:48:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:30.351 18:48:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:30.351 18:48:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.351 18:48:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.351 ************************************ 00:08:30.351 START TEST raid_read_error_test 00:08:30.351 
************************************ 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 
-- # local fail_per_s 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fi36tKMGMj 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76319 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76319 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76319 ']' 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.351 18:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.611 [2024-11-28 18:48:59.972179] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:30.611 [2024-11-28 18:48:59.972383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76319 ] 00:08:30.611 [2024-11-28 18:49:00.106635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:30.611 [2024-11-28 18:49:00.143902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.611 [2024-11-28 18:49:00.168471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.611 [2024-11-28 18:49:00.210111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.611 [2024-11-28 18:49:00.210233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.180 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.441 BaseBdev1_malloc 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.441 18:49:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.441 true 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.441 [2024-11-28 18:49:00.818670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:31.441 [2024-11-28 18:49:00.818731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.441 [2024-11-28 18:49:00.818767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:31.441 [2024-11-28 18:49:00.818779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.441 [2024-11-28 18:49:00.820893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.441 [2024-11-28 18:49:00.820988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:31.441 BaseBdev1 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.441 BaseBdev2_malloc 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.441 true 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.441 [2024-11-28 18:49:00.859093] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:31.441 [2024-11-28 18:49:00.859203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.441 [2024-11-28 18:49:00.859222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:31.441 [2024-11-28 18:49:00.859232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.441 [2024-11-28 18:49:00.861298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.441 [2024-11-28 18:49:00.861346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:31.441 BaseBdev2 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.441 18:49:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.441 [2024-11-28 18:49:00.871134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.441 [2024-11-28 18:49:00.872979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.441 [2024-11-28 18:49:00.873139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:31.441 [2024-11-28 18:49:00.873154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:31.441 [2024-11-28 18:49:00.873408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:31.441 [2024-11-28 18:49:00.873574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:31.441 [2024-11-28 18:49:00.873585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:31.441 [2024-11-28 18:49:00.873696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.441 "name": "raid_bdev1", 00:08:31.441 "uuid": "65f0805e-e22d-4ba0-8dfd-bf6b2a90d9c2", 00:08:31.441 "strip_size_kb": 0, 00:08:31.441 "state": "online", 00:08:31.441 "raid_level": "raid1", 00:08:31.441 "superblock": true, 00:08:31.441 "num_base_bdevs": 2, 00:08:31.441 "num_base_bdevs_discovered": 2, 00:08:31.441 "num_base_bdevs_operational": 2, 00:08:31.441 "base_bdevs_list": [ 00:08:31.441 { 00:08:31.441 "name": "BaseBdev1", 00:08:31.441 "uuid": "ee064875-9853-52d7-89f8-e41dd83a583f", 00:08:31.441 "is_configured": true, 00:08:31.441 "data_offset": 2048, 00:08:31.441 "data_size": 63488 00:08:31.441 }, 00:08:31.441 { 00:08:31.441 "name": "BaseBdev2", 00:08:31.441 "uuid": "5d9a7190-1e44-5240-b8da-7f070b48200c", 00:08:31.441 "is_configured": true, 00:08:31.441 "data_offset": 2048, 00:08:31.441 "data_size": 63488 00:08:31.441 } 00:08:31.441 ] 00:08:31.441 }' 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.441 18:49:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.703 18:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:31.703 18:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:31.962 [2024-11-28 18:49:01.359706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:32.902 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:32.902 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.902 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.902 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.902 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:32.902 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:32.902 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:32.902 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:32.902 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.903 "name": "raid_bdev1", 00:08:32.903 "uuid": "65f0805e-e22d-4ba0-8dfd-bf6b2a90d9c2", 00:08:32.903 "strip_size_kb": 0, 00:08:32.903 "state": "online", 00:08:32.903 "raid_level": "raid1", 00:08:32.903 "superblock": true, 00:08:32.903 "num_base_bdevs": 2, 00:08:32.903 "num_base_bdevs_discovered": 2, 00:08:32.903 "num_base_bdevs_operational": 2, 00:08:32.903 "base_bdevs_list": [ 00:08:32.903 { 00:08:32.903 "name": "BaseBdev1", 00:08:32.903 "uuid": "ee064875-9853-52d7-89f8-e41dd83a583f", 00:08:32.903 "is_configured": true, 00:08:32.903 "data_offset": 2048, 00:08:32.903 "data_size": 63488 00:08:32.903 }, 00:08:32.903 { 00:08:32.903 "name": "BaseBdev2", 00:08:32.903 "uuid": "5d9a7190-1e44-5240-b8da-7f070b48200c", 00:08:32.903 "is_configured": true, 00:08:32.903 "data_offset": 2048, 00:08:32.903 "data_size": 63488 00:08:32.903 } 00:08:32.903 ] 00:08:32.903 }' 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.903 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.163 [2024-11-28 18:49:02.701825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.163 [2024-11-28 18:49:02.701861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.163 [2024-11-28 18:49:02.704632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.163 [2024-11-28 18:49:02.704716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.163 [2024-11-28 18:49:02.704837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.163 [2024-11-28 18:49:02.704885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:33.163 { 00:08:33.163 "results": [ 00:08:33.163 { 00:08:33.163 "job": "raid_bdev1", 00:08:33.163 "core_mask": "0x1", 00:08:33.163 "workload": "randrw", 00:08:33.163 "percentage": 50, 00:08:33.163 "status": "finished", 00:08:33.163 "queue_depth": 1, 00:08:33.163 "io_size": 131072, 00:08:33.163 "runtime": 1.340263, 00:08:33.163 "iops": 19878.93420918133, 00:08:33.163 "mibps": 2484.8667761476663, 00:08:33.163 "io_failed": 0, 00:08:33.163 "io_timeout": 0, 00:08:33.163 "avg_latency_us": 47.796133005111685, 00:08:33.163 "min_latency_us": 22.313257212586073, 00:08:33.163 "max_latency_us": 1370.9265231412883 00:08:33.163 } 00:08:33.163 ], 00:08:33.163 "core_count": 1 00:08:33.163 } 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76319 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76319 ']' 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76319 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76319 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76319' 00:08:33.163 killing process with pid 76319 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76319 00:08:33.163 [2024-11-28 18:49:02.743335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.163 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76319 00:08:33.163 [2024-11-28 18:49:02.758427] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fi36tKMGMj 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:33.424 00:08:33.424 real 0m3.104s 00:08:33.424 user 0m3.910s 00:08:33.424 sys 0m0.508s 00:08:33.424 ************************************ 00:08:33.424 END TEST raid_read_error_test 00:08:33.424 ************************************ 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.424 18:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.684 18:49:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:33.684 18:49:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.684 18:49:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.684 18:49:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.684 ************************************ 00:08:33.684 START TEST raid_write_error_test 00:08:33.684 ************************************ 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.684 18:49:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vj02rwUCod 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76448 00:08:33.684 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76448 00:08:33.685 18:49:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:33.685 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76448 ']' 00:08:33.685 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.685 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.685 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.685 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.685 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.685 [2024-11-28 18:49:03.144585] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:33.685 [2024-11-28 18:49:03.144709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76448 ] 00:08:33.685 [2024-11-28 18:49:03.278871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:33.944 [2024-11-28 18:49:03.315300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.944 [2024-11-28 18:49:03.341245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.944 [2024-11-28 18:49:03.383684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.944 [2024-11-28 18:49:03.383721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.513 BaseBdev1_malloc 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.513 true 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.513 18:49:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.513 [2024-11-28 18:49:03.991755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:34.513 [2024-11-28 18:49:03.991811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.513 [2024-11-28 18:49:03.991826] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:34.513 [2024-11-28 18:49:03.991838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.513 [2024-11-28 18:49:03.993888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.513 [2024-11-28 18:49:03.993930] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:34.513 BaseBdev1 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.513 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.513 BaseBdev2_malloc 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.513 true 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.513 [2024-11-28 18:49:04.032169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:34.513 [2024-11-28 18:49:04.032219] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.513 [2024-11-28 18:49:04.032250] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:34.513 [2024-11-28 18:49:04.032259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.513 [2024-11-28 18:49:04.034264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.513 [2024-11-28 18:49:04.034301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:34.513 BaseBdev2 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.513 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.513 [2024-11-28 18:49:04.044197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.513 [2024-11-28 18:49:04.046020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.513 [2024-11-28 18:49:04.046182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:34.513 [2024-11-28 
18:49:04.046196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:34.513 [2024-11-28 18:49:04.046423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:34.513 [2024-11-28 18:49:04.046605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:34.514 [2024-11-28 18:49:04.046615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:34.514 [2024-11-28 18:49:04.046747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.514 18:49:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.514 "name": "raid_bdev1", 00:08:34.514 "uuid": "085a9064-bd7f-4e61-b356-fef60d341fdb", 00:08:34.514 "strip_size_kb": 0, 00:08:34.514 "state": "online", 00:08:34.514 "raid_level": "raid1", 00:08:34.514 "superblock": true, 00:08:34.514 "num_base_bdevs": 2, 00:08:34.514 "num_base_bdevs_discovered": 2, 00:08:34.514 "num_base_bdevs_operational": 2, 00:08:34.514 "base_bdevs_list": [ 00:08:34.514 { 00:08:34.514 "name": "BaseBdev1", 00:08:34.514 "uuid": "9660b186-9058-5683-b744-05af524054dc", 00:08:34.514 "is_configured": true, 00:08:34.514 "data_offset": 2048, 00:08:34.514 "data_size": 63488 00:08:34.514 }, 00:08:34.514 { 00:08:34.514 "name": "BaseBdev2", 00:08:34.514 "uuid": "ab1bbb5e-d54c-565b-abb9-0f338d39741d", 00:08:34.514 "is_configured": true, 00:08:34.514 "data_offset": 2048, 00:08:34.514 "data_size": 63488 00:08:34.514 } 00:08:34.514 ] 00:08:34.514 }' 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.514 18:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.083 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:35.083 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:35.083 [2024-11-28 18:49:04.592697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:36.021 18:49:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.021 [2024-11-28 18:49:05.511308] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:36.021 [2024-11-28 18:49:05.511456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.021 [2024-11-28 18:49:05.511677] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000067d0 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:36.021 
18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.021 "name": "raid_bdev1", 00:08:36.021 "uuid": "085a9064-bd7f-4e61-b356-fef60d341fdb", 00:08:36.021 "strip_size_kb": 0, 00:08:36.021 "state": "online", 00:08:36.021 "raid_level": "raid1", 00:08:36.021 "superblock": true, 00:08:36.021 "num_base_bdevs": 2, 00:08:36.021 "num_base_bdevs_discovered": 1, 00:08:36.021 "num_base_bdevs_operational": 1, 00:08:36.021 "base_bdevs_list": [ 00:08:36.021 { 00:08:36.021 "name": null, 00:08:36.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.021 "is_configured": false, 00:08:36.021 "data_offset": 0, 00:08:36.021 "data_size": 63488 00:08:36.021 }, 00:08:36.021 { 00:08:36.021 "name": "BaseBdev2", 00:08:36.021 "uuid": "ab1bbb5e-d54c-565b-abb9-0f338d39741d", 00:08:36.021 "is_configured": true, 00:08:36.021 "data_offset": 2048, 00:08:36.021 "data_size": 63488 00:08:36.021 } 00:08:36.021 ] 00:08:36.021 }' 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.021 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.589 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:36.589 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.589 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.589 [2024-11-28 18:49:05.925578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.589 [2024-11-28 18:49:05.925611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.590 [2024-11-28 18:49:05.928170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.590 [2024-11-28 18:49:05.928228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.590 [2024-11-28 18:49:05.928281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.590 [2024-11-28 18:49:05.928290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:36.590 { 00:08:36.590 "results": [ 00:08:36.590 { 00:08:36.590 "job": "raid_bdev1", 00:08:36.590 "core_mask": "0x1", 00:08:36.590 "workload": "randrw", 00:08:36.590 "percentage": 50, 00:08:36.590 "status": "finished", 00:08:36.590 "queue_depth": 1, 00:08:36.590 "io_size": 131072, 00:08:36.590 "runtime": 1.331025, 00:08:36.590 "iops": 23277.549257151444, 00:08:36.590 "mibps": 2909.6936571439305, 00:08:36.590 "io_failed": 0, 00:08:36.590 "io_timeout": 0, 00:08:36.590 "avg_latency_us": 40.45882064999282, 00:08:36.590 "min_latency_us": 21.53229321014556, 00:08:36.590 "max_latency_us": 1435.188703913536 00:08:36.590 } 00:08:36.590 ], 00:08:36.590 "core_count": 1 00:08:36.590 } 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76448 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76448 ']' 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76448 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76448 00:08:36.590 killing process with pid 76448 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76448' 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76448 00:08:36.590 [2024-11-28 18:49:05.974972] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.590 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76448 00:08:36.590 [2024-11-28 18:49:05.989600] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vj02rwUCod 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:36.851 00:08:36.851 real 0m3.158s 00:08:36.851 user 0m4.021s 00:08:36.851 sys 0m0.493s 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.851 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.851 ************************************ 00:08:36.851 END TEST raid_write_error_test 00:08:36.851 ************************************ 00:08:36.851 18:49:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:36.851 18:49:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:36.851 18:49:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:36.851 18:49:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:36.851 18:49:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.851 18:49:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.851 ************************************ 00:08:36.851 START TEST raid_state_function_test 00:08:36.851 ************************************ 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:36.851 18:49:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76575 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76575' 00:08:36.851 Process raid pid: 76575 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76575 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 76575 ']' 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.851 18:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.851 [2024-11-28 18:49:06.370231] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:36.851 [2024-11-28 18:49:06.370470] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.111 [2024-11-28 18:49:06.506546] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:37.111 [2024-11-28 18:49:06.542923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.111 [2024-11-28 18:49:06.567872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.111 [2024-11-28 18:49:06.609423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.111 [2024-11-28 18:49:06.609461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.681 [2024-11-28 18:49:07.196819] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.681 [2024-11-28 18:49:07.196875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.681 [2024-11-28 18:49:07.196887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.681 [2024-11-28 18:49:07.196895] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.681 [2024-11-28 18:49:07.196908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.681 [2024-11-28 18:49:07.196915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.681 18:49:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.681 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.681 "name": "Existed_Raid", 00:08:37.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.681 "strip_size_kb": 64, 00:08:37.681 "state": "configuring", 00:08:37.682 "raid_level": "raid0", 00:08:37.682 "superblock": false, 00:08:37.682 "num_base_bdevs": 3, 00:08:37.682 "num_base_bdevs_discovered": 0, 00:08:37.682 "num_base_bdevs_operational": 3, 00:08:37.682 "base_bdevs_list": [ 00:08:37.682 { 00:08:37.682 "name": "BaseBdev1", 00:08:37.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.682 "is_configured": false, 00:08:37.682 "data_offset": 0, 00:08:37.682 "data_size": 0 00:08:37.682 }, 00:08:37.682 { 00:08:37.682 "name": "BaseBdev2", 00:08:37.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.682 "is_configured": false, 00:08:37.682 "data_offset": 0, 00:08:37.682 "data_size": 0 00:08:37.682 }, 00:08:37.682 { 00:08:37.682 "name": "BaseBdev3", 00:08:37.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.682 "is_configured": false, 00:08:37.682 "data_offset": 0, 00:08:37.682 "data_size": 0 00:08:37.682 } 00:08:37.682 ] 00:08:37.682 }' 00:08:37.682 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.682 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.252 [2024-11-28 18:49:07.640839] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.252 [2024-11-28 18:49:07.640923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.252 [2024-11-28 18:49:07.648888] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.252 [2024-11-28 18:49:07.648979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.252 [2024-11-28 18:49:07.649008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.252 [2024-11-28 18:49:07.649028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.252 [2024-11-28 18:49:07.649047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.252 [2024-11-28 18:49:07.649066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.252 
[2024-11-28 18:49:07.669690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.252 BaseBdev1 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.252 [ 00:08:38.252 { 00:08:38.252 "name": "BaseBdev1", 00:08:38.252 "aliases": [ 00:08:38.252 "272131ac-b7d3-46f9-b672-f28b925ff5b7" 00:08:38.252 ], 00:08:38.252 "product_name": "Malloc disk", 00:08:38.252 "block_size": 512, 00:08:38.252 "num_blocks": 65536, 00:08:38.252 "uuid": 
"272131ac-b7d3-46f9-b672-f28b925ff5b7", 00:08:38.252 "assigned_rate_limits": { 00:08:38.252 "rw_ios_per_sec": 0, 00:08:38.252 "rw_mbytes_per_sec": 0, 00:08:38.252 "r_mbytes_per_sec": 0, 00:08:38.252 "w_mbytes_per_sec": 0 00:08:38.252 }, 00:08:38.252 "claimed": true, 00:08:38.252 "claim_type": "exclusive_write", 00:08:38.252 "zoned": false, 00:08:38.252 "supported_io_types": { 00:08:38.252 "read": true, 00:08:38.252 "write": true, 00:08:38.252 "unmap": true, 00:08:38.252 "flush": true, 00:08:38.252 "reset": true, 00:08:38.252 "nvme_admin": false, 00:08:38.252 "nvme_io": false, 00:08:38.252 "nvme_io_md": false, 00:08:38.252 "write_zeroes": true, 00:08:38.252 "zcopy": true, 00:08:38.252 "get_zone_info": false, 00:08:38.252 "zone_management": false, 00:08:38.252 "zone_append": false, 00:08:38.252 "compare": false, 00:08:38.252 "compare_and_write": false, 00:08:38.252 "abort": true, 00:08:38.252 "seek_hole": false, 00:08:38.252 "seek_data": false, 00:08:38.252 "copy": true, 00:08:38.252 "nvme_iov_md": false 00:08:38.252 }, 00:08:38.252 "memory_domains": [ 00:08:38.252 { 00:08:38.252 "dma_device_id": "system", 00:08:38.252 "dma_device_type": 1 00:08:38.252 }, 00:08:38.252 { 00:08:38.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.252 "dma_device_type": 2 00:08:38.252 } 00:08:38.252 ], 00:08:38.252 "driver_specific": {} 00:08:38.252 } 00:08:38.252 ] 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.252 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.253 18:49:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.253 "name": "Existed_Raid", 00:08:38.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.253 "strip_size_kb": 64, 00:08:38.253 "state": "configuring", 00:08:38.253 "raid_level": "raid0", 00:08:38.253 "superblock": false, 00:08:38.253 "num_base_bdevs": 3, 00:08:38.253 "num_base_bdevs_discovered": 1, 00:08:38.253 "num_base_bdevs_operational": 3, 00:08:38.253 "base_bdevs_list": [ 00:08:38.253 { 00:08:38.253 "name": "BaseBdev1", 00:08:38.253 "uuid": "272131ac-b7d3-46f9-b672-f28b925ff5b7", 00:08:38.253 "is_configured": true, 00:08:38.253 "data_offset": 0, 
00:08:38.253 "data_size": 65536 00:08:38.253 }, 00:08:38.253 { 00:08:38.253 "name": "BaseBdev2", 00:08:38.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.253 "is_configured": false, 00:08:38.253 "data_offset": 0, 00:08:38.253 "data_size": 0 00:08:38.253 }, 00:08:38.253 { 00:08:38.253 "name": "BaseBdev3", 00:08:38.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.253 "is_configured": false, 00:08:38.253 "data_offset": 0, 00:08:38.253 "data_size": 0 00:08:38.253 } 00:08:38.253 ] 00:08:38.253 }' 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.253 18:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.823 [2024-11-28 18:49:08.129854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.823 [2024-11-28 18:49:08.129901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.823 [2024-11-28 18:49:08.137899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.823 [2024-11-28 
18:49:08.139729] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.823 [2024-11-28 18:49:08.139812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.823 [2024-11-28 18:49:08.139830] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.823 [2024-11-28 18:49:08.139838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.823 18:49:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.823 "name": "Existed_Raid", 00:08:38.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.823 "strip_size_kb": 64, 00:08:38.823 "state": "configuring", 00:08:38.823 "raid_level": "raid0", 00:08:38.823 "superblock": false, 00:08:38.823 "num_base_bdevs": 3, 00:08:38.823 "num_base_bdevs_discovered": 1, 00:08:38.823 "num_base_bdevs_operational": 3, 00:08:38.823 "base_bdevs_list": [ 00:08:38.823 { 00:08:38.823 "name": "BaseBdev1", 00:08:38.823 "uuid": "272131ac-b7d3-46f9-b672-f28b925ff5b7", 00:08:38.823 "is_configured": true, 00:08:38.823 "data_offset": 0, 00:08:38.823 "data_size": 65536 00:08:38.823 }, 00:08:38.823 { 00:08:38.823 "name": "BaseBdev2", 00:08:38.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.823 "is_configured": false, 00:08:38.823 "data_offset": 0, 00:08:38.823 "data_size": 0 00:08:38.823 }, 00:08:38.823 { 00:08:38.823 "name": "BaseBdev3", 00:08:38.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.823 "is_configured": false, 00:08:38.823 "data_offset": 0, 00:08:38.823 "data_size": 0 00:08:38.823 } 00:08:38.823 ] 00:08:38.823 }' 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.823 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.083 18:49:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.083 [2024-11-28 18:49:08.616921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.083 BaseBdev2 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.083 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.084 [ 00:08:39.084 { 00:08:39.084 "name": "BaseBdev2", 00:08:39.084 "aliases": [ 00:08:39.084 "c930115c-2e60-42db-949f-79bf88848753" 00:08:39.084 ], 00:08:39.084 "product_name": "Malloc disk", 00:08:39.084 "block_size": 512, 00:08:39.084 "num_blocks": 65536, 00:08:39.084 "uuid": "c930115c-2e60-42db-949f-79bf88848753", 00:08:39.084 "assigned_rate_limits": { 00:08:39.084 "rw_ios_per_sec": 0, 00:08:39.084 "rw_mbytes_per_sec": 0, 00:08:39.084 "r_mbytes_per_sec": 0, 00:08:39.084 "w_mbytes_per_sec": 0 00:08:39.084 }, 00:08:39.084 "claimed": true, 00:08:39.084 "claim_type": "exclusive_write", 00:08:39.084 "zoned": false, 00:08:39.084 "supported_io_types": { 00:08:39.084 "read": true, 00:08:39.084 "write": true, 00:08:39.084 "unmap": true, 00:08:39.084 "flush": true, 00:08:39.084 "reset": true, 00:08:39.084 "nvme_admin": false, 00:08:39.084 "nvme_io": false, 00:08:39.084 "nvme_io_md": false, 00:08:39.084 "write_zeroes": true, 00:08:39.084 "zcopy": true, 00:08:39.084 "get_zone_info": false, 00:08:39.084 "zone_management": false, 00:08:39.084 "zone_append": false, 00:08:39.084 "compare": false, 00:08:39.084 "compare_and_write": false, 00:08:39.084 "abort": true, 00:08:39.084 "seek_hole": false, 00:08:39.084 "seek_data": false, 00:08:39.084 "copy": true, 00:08:39.084 "nvme_iov_md": false 00:08:39.084 }, 00:08:39.084 "memory_domains": [ 00:08:39.084 { 00:08:39.084 "dma_device_id": "system", 00:08:39.084 "dma_device_type": 1 00:08:39.084 }, 00:08:39.084 { 00:08:39.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.084 "dma_device_type": 2 00:08:39.084 } 00:08:39.084 ], 00:08:39.084 "driver_specific": {} 00:08:39.084 } 00:08:39.084 ] 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.084 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.344 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.344 "name": "Existed_Raid", 
00:08:39.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.344 "strip_size_kb": 64, 00:08:39.344 "state": "configuring", 00:08:39.344 "raid_level": "raid0", 00:08:39.344 "superblock": false, 00:08:39.344 "num_base_bdevs": 3, 00:08:39.344 "num_base_bdevs_discovered": 2, 00:08:39.344 "num_base_bdevs_operational": 3, 00:08:39.344 "base_bdevs_list": [ 00:08:39.344 { 00:08:39.344 "name": "BaseBdev1", 00:08:39.344 "uuid": "272131ac-b7d3-46f9-b672-f28b925ff5b7", 00:08:39.344 "is_configured": true, 00:08:39.344 "data_offset": 0, 00:08:39.344 "data_size": 65536 00:08:39.344 }, 00:08:39.344 { 00:08:39.344 "name": "BaseBdev2", 00:08:39.344 "uuid": "c930115c-2e60-42db-949f-79bf88848753", 00:08:39.344 "is_configured": true, 00:08:39.344 "data_offset": 0, 00:08:39.344 "data_size": 65536 00:08:39.344 }, 00:08:39.344 { 00:08:39.344 "name": "BaseBdev3", 00:08:39.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.344 "is_configured": false, 00:08:39.344 "data_offset": 0, 00:08:39.344 "data_size": 0 00:08:39.344 } 00:08:39.344 ] 00:08:39.344 }' 00:08:39.344 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.344 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.605 [2024-11-28 18:49:09.092728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.605 [2024-11-28 18:49:09.092844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:39.605 [2024-11-28 18:49:09.092878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 
00:08:39.605 [2024-11-28 18:49:09.093955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:39.605 [2024-11-28 18:49:09.094396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:39.605 [2024-11-28 18:49:09.094484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:39.605 [2024-11-28 18:49:09.095089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.605 BaseBdev3 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.605 [ 00:08:39.605 { 00:08:39.605 "name": "BaseBdev3", 00:08:39.605 "aliases": [ 00:08:39.605 "57f4f1eb-9b57-4e80-b108-2f752e5c7e11" 00:08:39.605 ], 00:08:39.605 "product_name": "Malloc disk", 00:08:39.605 "block_size": 512, 00:08:39.605 "num_blocks": 65536, 00:08:39.605 "uuid": "57f4f1eb-9b57-4e80-b108-2f752e5c7e11", 00:08:39.605 "assigned_rate_limits": { 00:08:39.605 "rw_ios_per_sec": 0, 00:08:39.605 "rw_mbytes_per_sec": 0, 00:08:39.605 "r_mbytes_per_sec": 0, 00:08:39.605 "w_mbytes_per_sec": 0 00:08:39.605 }, 00:08:39.605 "claimed": true, 00:08:39.605 "claim_type": "exclusive_write", 00:08:39.605 "zoned": false, 00:08:39.605 "supported_io_types": { 00:08:39.605 "read": true, 00:08:39.605 "write": true, 00:08:39.605 "unmap": true, 00:08:39.605 "flush": true, 00:08:39.605 "reset": true, 00:08:39.605 "nvme_admin": false, 00:08:39.605 "nvme_io": false, 00:08:39.605 "nvme_io_md": false, 00:08:39.605 "write_zeroes": true, 00:08:39.605 "zcopy": true, 00:08:39.605 "get_zone_info": false, 00:08:39.605 "zone_management": false, 00:08:39.605 "zone_append": false, 00:08:39.605 "compare": false, 00:08:39.605 "compare_and_write": false, 00:08:39.605 "abort": true, 00:08:39.605 "seek_hole": false, 00:08:39.605 "seek_data": false, 00:08:39.605 "copy": true, 00:08:39.605 "nvme_iov_md": false 00:08:39.605 }, 00:08:39.605 "memory_domains": [ 00:08:39.605 { 00:08:39.605 "dma_device_id": "system", 00:08:39.605 "dma_device_type": 1 00:08:39.605 }, 00:08:39.605 { 00:08:39.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.605 "dma_device_type": 2 00:08:39.605 } 00:08:39.605 ], 00:08:39.605 "driver_specific": {} 00:08:39.605 } 00:08:39.605 ] 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.605 "name": "Existed_Raid", 00:08:39.605 "uuid": "a096ae24-cfcb-485f-8c84-5602678a7d66", 00:08:39.605 "strip_size_kb": 64, 00:08:39.605 "state": "online", 00:08:39.605 "raid_level": "raid0", 00:08:39.605 "superblock": false, 00:08:39.605 "num_base_bdevs": 3, 00:08:39.605 "num_base_bdevs_discovered": 3, 00:08:39.605 "num_base_bdevs_operational": 3, 00:08:39.605 "base_bdevs_list": [ 00:08:39.605 { 00:08:39.605 "name": "BaseBdev1", 00:08:39.605 "uuid": "272131ac-b7d3-46f9-b672-f28b925ff5b7", 00:08:39.605 "is_configured": true, 00:08:39.605 "data_offset": 0, 00:08:39.605 "data_size": 65536 00:08:39.605 }, 00:08:39.605 { 00:08:39.605 "name": "BaseBdev2", 00:08:39.605 "uuid": "c930115c-2e60-42db-949f-79bf88848753", 00:08:39.605 "is_configured": true, 00:08:39.605 "data_offset": 0, 00:08:39.605 "data_size": 65536 00:08:39.605 }, 00:08:39.605 { 00:08:39.605 "name": "BaseBdev3", 00:08:39.605 "uuid": "57f4f1eb-9b57-4e80-b108-2f752e5c7e11", 00:08:39.605 "is_configured": true, 00:08:39.605 "data_offset": 0, 00:08:39.605 "data_size": 65536 00:08:39.605 } 00:08:39.605 ] 00:08:39.605 }' 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.605 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.175 [2024-11-28 18:49:09.581095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.175 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.175 "name": "Existed_Raid", 00:08:40.175 "aliases": [ 00:08:40.175 "a096ae24-cfcb-485f-8c84-5602678a7d66" 00:08:40.175 ], 00:08:40.175 "product_name": "Raid Volume", 00:08:40.175 "block_size": 512, 00:08:40.175 "num_blocks": 196608, 00:08:40.175 "uuid": "a096ae24-cfcb-485f-8c84-5602678a7d66", 00:08:40.175 "assigned_rate_limits": { 00:08:40.175 "rw_ios_per_sec": 0, 00:08:40.175 "rw_mbytes_per_sec": 0, 00:08:40.175 "r_mbytes_per_sec": 0, 00:08:40.175 "w_mbytes_per_sec": 0 00:08:40.175 }, 00:08:40.175 "claimed": false, 00:08:40.175 "zoned": false, 00:08:40.175 "supported_io_types": { 00:08:40.175 "read": true, 00:08:40.175 "write": true, 00:08:40.175 "unmap": true, 00:08:40.175 "flush": true, 00:08:40.175 "reset": true, 00:08:40.175 "nvme_admin": false, 00:08:40.175 "nvme_io": false, 00:08:40.175 "nvme_io_md": false, 00:08:40.176 "write_zeroes": true, 00:08:40.176 "zcopy": false, 00:08:40.176 "get_zone_info": false, 00:08:40.176 "zone_management": false, 00:08:40.176 "zone_append": false, 00:08:40.176 "compare": false, 00:08:40.176 "compare_and_write": false, 00:08:40.176 "abort": false, 00:08:40.176 "seek_hole": false, 00:08:40.176 "seek_data": false, 00:08:40.176 "copy": 
false, 00:08:40.176 "nvme_iov_md": false 00:08:40.176 }, 00:08:40.176 "memory_domains": [ 00:08:40.176 { 00:08:40.176 "dma_device_id": "system", 00:08:40.176 "dma_device_type": 1 00:08:40.176 }, 00:08:40.176 { 00:08:40.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.176 "dma_device_type": 2 00:08:40.176 }, 00:08:40.176 { 00:08:40.176 "dma_device_id": "system", 00:08:40.176 "dma_device_type": 1 00:08:40.176 }, 00:08:40.176 { 00:08:40.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.176 "dma_device_type": 2 00:08:40.176 }, 00:08:40.176 { 00:08:40.176 "dma_device_id": "system", 00:08:40.176 "dma_device_type": 1 00:08:40.176 }, 00:08:40.176 { 00:08:40.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.176 "dma_device_type": 2 00:08:40.176 } 00:08:40.176 ], 00:08:40.176 "driver_specific": { 00:08:40.176 "raid": { 00:08:40.176 "uuid": "a096ae24-cfcb-485f-8c84-5602678a7d66", 00:08:40.176 "strip_size_kb": 64, 00:08:40.176 "state": "online", 00:08:40.176 "raid_level": "raid0", 00:08:40.176 "superblock": false, 00:08:40.176 "num_base_bdevs": 3, 00:08:40.176 "num_base_bdevs_discovered": 3, 00:08:40.176 "num_base_bdevs_operational": 3, 00:08:40.176 "base_bdevs_list": [ 00:08:40.176 { 00:08:40.176 "name": "BaseBdev1", 00:08:40.176 "uuid": "272131ac-b7d3-46f9-b672-f28b925ff5b7", 00:08:40.176 "is_configured": true, 00:08:40.176 "data_offset": 0, 00:08:40.176 "data_size": 65536 00:08:40.176 }, 00:08:40.176 { 00:08:40.176 "name": "BaseBdev2", 00:08:40.176 "uuid": "c930115c-2e60-42db-949f-79bf88848753", 00:08:40.176 "is_configured": true, 00:08:40.176 "data_offset": 0, 00:08:40.176 "data_size": 65536 00:08:40.176 }, 00:08:40.176 { 00:08:40.176 "name": "BaseBdev3", 00:08:40.176 "uuid": "57f4f1eb-9b57-4e80-b108-2f752e5c7e11", 00:08:40.176 "is_configured": true, 00:08:40.176 "data_offset": 0, 00:08:40.176 "data_size": 65536 00:08:40.176 } 00:08:40.176 ] 00:08:40.176 } 00:08:40.176 } 00:08:40.176 }' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:40.176 BaseBdev2 00:08:40.176 BaseBdev3' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.176 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.437 [2024-11-28 18:49:09.824931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.437 [2024-11-28 18:49:09.824960] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.437 [2024-11-28 18:49:09.825006] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.437 18:49:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.437 "name": "Existed_Raid", 00:08:40.437 "uuid": "a096ae24-cfcb-485f-8c84-5602678a7d66", 00:08:40.437 "strip_size_kb": 64, 00:08:40.437 "state": "offline", 00:08:40.437 "raid_level": "raid0", 00:08:40.437 "superblock": false, 00:08:40.437 "num_base_bdevs": 3, 00:08:40.437 "num_base_bdevs_discovered": 2, 00:08:40.437 "num_base_bdevs_operational": 2, 00:08:40.437 "base_bdevs_list": [ 00:08:40.437 { 00:08:40.437 "name": null, 00:08:40.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.437 "is_configured": false, 00:08:40.437 "data_offset": 0, 00:08:40.437 "data_size": 65536 00:08:40.437 }, 00:08:40.437 { 00:08:40.437 "name": "BaseBdev2", 00:08:40.437 "uuid": "c930115c-2e60-42db-949f-79bf88848753", 00:08:40.437 "is_configured": true, 00:08:40.437 "data_offset": 0, 00:08:40.437 "data_size": 65536 00:08:40.437 }, 00:08:40.437 { 00:08:40.437 "name": "BaseBdev3", 00:08:40.437 "uuid": "57f4f1eb-9b57-4e80-b108-2f752e5c7e11", 00:08:40.437 "is_configured": true, 00:08:40.437 "data_offset": 0, 00:08:40.437 "data_size": 65536 00:08:40.437 } 00:08:40.437 ] 00:08:40.437 }' 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.437 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.010 [2024-11-28 18:49:10.376153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.010 [2024-11-28 18:49:10.431243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:41.010 [2024-11-28 18:49:10.431296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:41.010 18:49:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.010 BaseBdev2 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.010 
18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.010 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.010 [ 00:08:41.010 { 00:08:41.010 "name": "BaseBdev2", 00:08:41.010 "aliases": [ 00:08:41.010 "cb752023-ee50-4b7a-9369-9fe24e6afa93" 00:08:41.010 ], 00:08:41.010 "product_name": "Malloc disk", 00:08:41.010 "block_size": 512, 00:08:41.010 "num_blocks": 65536, 00:08:41.010 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:41.010 "assigned_rate_limits": { 00:08:41.010 "rw_ios_per_sec": 0, 00:08:41.010 "rw_mbytes_per_sec": 0, 00:08:41.010 "r_mbytes_per_sec": 0, 00:08:41.010 "w_mbytes_per_sec": 0 00:08:41.010 }, 00:08:41.010 "claimed": false, 00:08:41.010 "zoned": false, 00:08:41.010 "supported_io_types": { 00:08:41.010 "read": true, 00:08:41.011 "write": true, 00:08:41.011 "unmap": true, 00:08:41.011 "flush": true, 00:08:41.011 "reset": true, 00:08:41.011 "nvme_admin": false, 00:08:41.011 "nvme_io": false, 00:08:41.011 "nvme_io_md": false, 00:08:41.011 "write_zeroes": true, 00:08:41.011 "zcopy": true, 00:08:41.011 "get_zone_info": false, 00:08:41.011 "zone_management": false, 00:08:41.011 "zone_append": false, 00:08:41.011 "compare": false, 00:08:41.011 "compare_and_write": false, 00:08:41.011 "abort": true, 00:08:41.011 "seek_hole": false, 00:08:41.011 "seek_data": false, 00:08:41.011 "copy": true, 00:08:41.011 "nvme_iov_md": false 00:08:41.011 }, 00:08:41.011 "memory_domains": [ 00:08:41.011 { 00:08:41.011 "dma_device_id": "system", 00:08:41.011 "dma_device_type": 1 00:08:41.011 }, 00:08:41.011 { 00:08:41.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.011 "dma_device_type": 2 00:08:41.011 } 00:08:41.011 ], 00:08:41.011 "driver_specific": {} 00:08:41.011 } 00:08:41.011 ] 00:08:41.011 18:49:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.011 BaseBdev3 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.011 
18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.011 [ 00:08:41.011 { 00:08:41.011 "name": "BaseBdev3", 00:08:41.011 "aliases": [ 00:08:41.011 "702566b5-e253-4fde-8d57-a222be38ebac" 00:08:41.011 ], 00:08:41.011 "product_name": "Malloc disk", 00:08:41.011 "block_size": 512, 00:08:41.011 "num_blocks": 65536, 00:08:41.011 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:41.011 "assigned_rate_limits": { 00:08:41.011 "rw_ios_per_sec": 0, 00:08:41.011 "rw_mbytes_per_sec": 0, 00:08:41.011 "r_mbytes_per_sec": 0, 00:08:41.011 "w_mbytes_per_sec": 0 00:08:41.011 }, 00:08:41.011 "claimed": false, 00:08:41.011 "zoned": false, 00:08:41.011 "supported_io_types": { 00:08:41.011 "read": true, 00:08:41.011 "write": true, 00:08:41.011 "unmap": true, 00:08:41.011 "flush": true, 00:08:41.011 "reset": true, 00:08:41.011 "nvme_admin": false, 00:08:41.011 "nvme_io": false, 00:08:41.011 "nvme_io_md": false, 00:08:41.011 "write_zeroes": true, 00:08:41.011 "zcopy": true, 00:08:41.011 "get_zone_info": false, 00:08:41.011 "zone_management": false, 00:08:41.011 "zone_append": false, 00:08:41.011 "compare": false, 00:08:41.011 "compare_and_write": false, 00:08:41.011 "abort": true, 00:08:41.011 "seek_hole": false, 00:08:41.011 "seek_data": false, 00:08:41.011 "copy": true, 00:08:41.011 "nvme_iov_md": false 00:08:41.011 }, 00:08:41.011 "memory_domains": [ 00:08:41.011 { 00:08:41.011 "dma_device_id": "system", 00:08:41.011 "dma_device_type": 1 00:08:41.011 }, 00:08:41.011 { 00:08:41.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.011 "dma_device_type": 2 00:08:41.011 } 00:08:41.011 ], 00:08:41.011 "driver_specific": {} 00:08:41.011 } 00:08:41.011 ] 00:08:41.011 18:49:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.011 [2024-11-28 18:49:10.605900] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.011 [2024-11-28 18:49:10.605947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.011 [2024-11-28 18:49:10.605967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.011 [2024-11-28 18:49:10.607764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.011 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.279 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.279 "name": "Existed_Raid", 00:08:41.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.279 "strip_size_kb": 64, 00:08:41.279 "state": "configuring", 00:08:41.279 "raid_level": "raid0", 00:08:41.279 "superblock": false, 00:08:41.279 "num_base_bdevs": 3, 00:08:41.279 "num_base_bdevs_discovered": 2, 00:08:41.279 "num_base_bdevs_operational": 3, 00:08:41.279 "base_bdevs_list": [ 00:08:41.279 { 00:08:41.279 "name": "BaseBdev1", 00:08:41.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.279 "is_configured": false, 00:08:41.279 "data_offset": 0, 00:08:41.279 "data_size": 0 00:08:41.279 }, 00:08:41.279 { 00:08:41.279 "name": "BaseBdev2", 00:08:41.279 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:41.279 
"is_configured": true, 00:08:41.279 "data_offset": 0, 00:08:41.279 "data_size": 65536 00:08:41.279 }, 00:08:41.280 { 00:08:41.280 "name": "BaseBdev3", 00:08:41.280 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:41.280 "is_configured": true, 00:08:41.280 "data_offset": 0, 00:08:41.280 "data_size": 65536 00:08:41.280 } 00:08:41.280 ] 00:08:41.280 }' 00:08:41.280 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.280 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.554 [2024-11-28 18:49:11.054013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.554 18:49:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.554 "name": "Existed_Raid", 00:08:41.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.554 "strip_size_kb": 64, 00:08:41.554 "state": "configuring", 00:08:41.554 "raid_level": "raid0", 00:08:41.554 "superblock": false, 00:08:41.554 "num_base_bdevs": 3, 00:08:41.554 "num_base_bdevs_discovered": 1, 00:08:41.554 "num_base_bdevs_operational": 3, 00:08:41.554 "base_bdevs_list": [ 00:08:41.554 { 00:08:41.554 "name": "BaseBdev1", 00:08:41.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.554 "is_configured": false, 00:08:41.554 "data_offset": 0, 00:08:41.554 "data_size": 0 00:08:41.554 }, 00:08:41.554 { 00:08:41.554 "name": null, 00:08:41.554 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:41.554 "is_configured": false, 00:08:41.554 "data_offset": 0, 00:08:41.554 "data_size": 65536 00:08:41.554 }, 00:08:41.554 { 00:08:41.554 "name": "BaseBdev3", 00:08:41.554 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:41.554 "is_configured": true, 00:08:41.554 "data_offset": 0, 
00:08:41.554 "data_size": 65536 00:08:41.554 } 00:08:41.554 ] 00:08:41.554 }' 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.554 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.124 [2024-11-28 18:49:11.533063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.124 BaseBdev1 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 
-- # local i 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.124 [ 00:08:42.124 { 00:08:42.124 "name": "BaseBdev1", 00:08:42.124 "aliases": [ 00:08:42.124 "8726e8c0-4be5-433a-aa61-2c4c2e708bbb" 00:08:42.124 ], 00:08:42.124 "product_name": "Malloc disk", 00:08:42.124 "block_size": 512, 00:08:42.124 "num_blocks": 65536, 00:08:42.124 "uuid": "8726e8c0-4be5-433a-aa61-2c4c2e708bbb", 00:08:42.124 "assigned_rate_limits": { 00:08:42.124 "rw_ios_per_sec": 0, 00:08:42.124 "rw_mbytes_per_sec": 0, 00:08:42.124 "r_mbytes_per_sec": 0, 00:08:42.124 "w_mbytes_per_sec": 0 00:08:42.124 }, 00:08:42.124 "claimed": true, 00:08:42.124 "claim_type": "exclusive_write", 00:08:42.124 "zoned": false, 00:08:42.124 "supported_io_types": { 00:08:42.124 "read": true, 00:08:42.124 "write": true, 00:08:42.124 "unmap": true, 00:08:42.124 "flush": true, 00:08:42.124 "reset": true, 00:08:42.124 "nvme_admin": false, 00:08:42.124 "nvme_io": false, 00:08:42.124 "nvme_io_md": false, 00:08:42.124 "write_zeroes": true, 00:08:42.124 "zcopy": true, 
00:08:42.124 "get_zone_info": false, 00:08:42.124 "zone_management": false, 00:08:42.124 "zone_append": false, 00:08:42.124 "compare": false, 00:08:42.124 "compare_and_write": false, 00:08:42.124 "abort": true, 00:08:42.124 "seek_hole": false, 00:08:42.124 "seek_data": false, 00:08:42.124 "copy": true, 00:08:42.124 "nvme_iov_md": false 00:08:42.124 }, 00:08:42.124 "memory_domains": [ 00:08:42.124 { 00:08:42.124 "dma_device_id": "system", 00:08:42.124 "dma_device_type": 1 00:08:42.124 }, 00:08:42.124 { 00:08:42.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.124 "dma_device_type": 2 00:08:42.124 } 00:08:42.124 ], 00:08:42.124 "driver_specific": {} 00:08:42.124 } 00:08:42.124 ] 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.124 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.125 18:49:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.125 "name": "Existed_Raid", 00:08:42.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.125 "strip_size_kb": 64, 00:08:42.125 "state": "configuring", 00:08:42.125 "raid_level": "raid0", 00:08:42.125 "superblock": false, 00:08:42.125 "num_base_bdevs": 3, 00:08:42.125 "num_base_bdevs_discovered": 2, 00:08:42.125 "num_base_bdevs_operational": 3, 00:08:42.125 "base_bdevs_list": [ 00:08:42.125 { 00:08:42.125 "name": "BaseBdev1", 00:08:42.125 "uuid": "8726e8c0-4be5-433a-aa61-2c4c2e708bbb", 00:08:42.125 "is_configured": true, 00:08:42.125 "data_offset": 0, 00:08:42.125 "data_size": 65536 00:08:42.125 }, 00:08:42.125 { 00:08:42.125 "name": null, 00:08:42.125 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:42.125 "is_configured": false, 00:08:42.125 "data_offset": 0, 00:08:42.125 "data_size": 65536 00:08:42.125 }, 00:08:42.125 { 00:08:42.125 "name": "BaseBdev3", 00:08:42.125 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:42.125 "is_configured": true, 00:08:42.125 "data_offset": 0, 00:08:42.125 "data_size": 65536 00:08:42.125 } 00:08:42.125 ] 00:08:42.125 }' 00:08:42.125 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.125 18:49:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.384 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.384 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.384 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.385 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.644 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.644 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:42.644 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:42.644 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.644 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.644 [2024-11-28 18:49:12.049258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.644 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.644 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.644 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.644 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.645 "name": "Existed_Raid", 00:08:42.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.645 "strip_size_kb": 64, 00:08:42.645 "state": "configuring", 00:08:42.645 "raid_level": "raid0", 00:08:42.645 "superblock": false, 00:08:42.645 "num_base_bdevs": 3, 00:08:42.645 "num_base_bdevs_discovered": 1, 00:08:42.645 "num_base_bdevs_operational": 3, 00:08:42.645 "base_bdevs_list": [ 00:08:42.645 { 00:08:42.645 "name": "BaseBdev1", 00:08:42.645 "uuid": "8726e8c0-4be5-433a-aa61-2c4c2e708bbb", 00:08:42.645 "is_configured": true, 00:08:42.645 "data_offset": 0, 00:08:42.645 "data_size": 65536 00:08:42.645 }, 00:08:42.645 { 00:08:42.645 "name": null, 00:08:42.645 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:42.645 "is_configured": false, 00:08:42.645 "data_offset": 0, 00:08:42.645 "data_size": 65536 
00:08:42.645 }, 00:08:42.645 { 00:08:42.645 "name": null, 00:08:42.645 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:42.645 "is_configured": false, 00:08:42.645 "data_offset": 0, 00:08:42.645 "data_size": 65536 00:08:42.645 } 00:08:42.645 ] 00:08:42.645 }' 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.645 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.905 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:42.905 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.905 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.905 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.905 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.165 [2024-11-28 18:49:12.513396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.165 "name": "Existed_Raid", 00:08:43.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.165 "strip_size_kb": 64, 00:08:43.165 "state": "configuring", 00:08:43.165 "raid_level": "raid0", 00:08:43.165 "superblock": false, 00:08:43.165 "num_base_bdevs": 3, 00:08:43.165 "num_base_bdevs_discovered": 2, 00:08:43.165 "num_base_bdevs_operational": 3, 00:08:43.165 "base_bdevs_list": [ 
00:08:43.165 { 00:08:43.165 "name": "BaseBdev1", 00:08:43.165 "uuid": "8726e8c0-4be5-433a-aa61-2c4c2e708bbb", 00:08:43.165 "is_configured": true, 00:08:43.165 "data_offset": 0, 00:08:43.165 "data_size": 65536 00:08:43.165 }, 00:08:43.165 { 00:08:43.165 "name": null, 00:08:43.165 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:43.165 "is_configured": false, 00:08:43.165 "data_offset": 0, 00:08:43.165 "data_size": 65536 00:08:43.165 }, 00:08:43.165 { 00:08:43.165 "name": "BaseBdev3", 00:08:43.165 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:43.165 "is_configured": true, 00:08:43.165 "data_offset": 0, 00:08:43.165 "data_size": 65536 00:08:43.165 } 00:08:43.165 ] 00:08:43.165 }' 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.165 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.426 [2024-11-28 18:49:12.949533] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.426 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.426 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:43.426 "name": "Existed_Raid", 00:08:43.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.426 "strip_size_kb": 64, 00:08:43.426 "state": "configuring", 00:08:43.426 "raid_level": "raid0", 00:08:43.426 "superblock": false, 00:08:43.426 "num_base_bdevs": 3, 00:08:43.426 "num_base_bdevs_discovered": 1, 00:08:43.426 "num_base_bdevs_operational": 3, 00:08:43.426 "base_bdevs_list": [ 00:08:43.426 { 00:08:43.426 "name": null, 00:08:43.426 "uuid": "8726e8c0-4be5-433a-aa61-2c4c2e708bbb", 00:08:43.426 "is_configured": false, 00:08:43.426 "data_offset": 0, 00:08:43.426 "data_size": 65536 00:08:43.426 }, 00:08:43.426 { 00:08:43.426 "name": null, 00:08:43.426 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:43.426 "is_configured": false, 00:08:43.426 "data_offset": 0, 00:08:43.426 "data_size": 65536 00:08:43.426 }, 00:08:43.426 { 00:08:43.426 "name": "BaseBdev3", 00:08:43.426 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:43.426 "is_configured": true, 00:08:43.426 "data_offset": 0, 00:08:43.426 "data_size": 65536 00:08:43.426 } 00:08:43.426 ] 00:08:43.426 }' 00:08:43.426 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.426 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 
00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.996 [2024-11-28 18:49:13.411882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.996 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.996 "name": "Existed_Raid", 00:08:43.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.996 "strip_size_kb": 64, 00:08:43.996 "state": "configuring", 00:08:43.996 "raid_level": "raid0", 00:08:43.996 "superblock": false, 00:08:43.996 "num_base_bdevs": 3, 00:08:43.996 "num_base_bdevs_discovered": 2, 00:08:43.996 "num_base_bdevs_operational": 3, 00:08:43.996 "base_bdevs_list": [ 00:08:43.996 { 00:08:43.996 "name": null, 00:08:43.996 "uuid": "8726e8c0-4be5-433a-aa61-2c4c2e708bbb", 00:08:43.996 "is_configured": false, 00:08:43.996 "data_offset": 0, 00:08:43.996 "data_size": 65536 00:08:43.996 }, 00:08:43.996 { 00:08:43.996 "name": "BaseBdev2", 00:08:43.996 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:43.996 "is_configured": true, 00:08:43.996 "data_offset": 0, 00:08:43.997 "data_size": 65536 00:08:43.997 }, 00:08:43.997 { 00:08:43.997 "name": "BaseBdev3", 00:08:43.997 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:43.997 "is_configured": true, 00:08:43.997 "data_offset": 0, 00:08:43.997 "data_size": 65536 00:08:43.997 } 00:08:43.997 ] 00:08:43.997 }' 00:08:43.997 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.997 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.256 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.256 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:44.256 18:49:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.256 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.256 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.516 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:44.516 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.516 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:44.516 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.516 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.516 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.516 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8726e8c0-4be5-433a-aa61-2c4c2e708bbb 00:08:44.516 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.516 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.516 [2024-11-28 18:49:13.922867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:44.516 [2024-11-28 18:49:13.922916] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:44.516 [2024-11-28 18:49:13.922924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:44.516 [2024-11-28 18:49:13.923198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:44.516 [2024-11-28 18:49:13.923328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:44.516 
[2024-11-28 18:49:13.923350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:44.517 [2024-11-28 18:49:13.923535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.517 NewBaseBdev 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.517 [ 00:08:44.517 { 00:08:44.517 "name": "NewBaseBdev", 00:08:44.517 "aliases": [ 00:08:44.517 
"8726e8c0-4be5-433a-aa61-2c4c2e708bbb" 00:08:44.517 ], 00:08:44.517 "product_name": "Malloc disk", 00:08:44.517 "block_size": 512, 00:08:44.517 "num_blocks": 65536, 00:08:44.517 "uuid": "8726e8c0-4be5-433a-aa61-2c4c2e708bbb", 00:08:44.517 "assigned_rate_limits": { 00:08:44.517 "rw_ios_per_sec": 0, 00:08:44.517 "rw_mbytes_per_sec": 0, 00:08:44.517 "r_mbytes_per_sec": 0, 00:08:44.517 "w_mbytes_per_sec": 0 00:08:44.517 }, 00:08:44.517 "claimed": true, 00:08:44.517 "claim_type": "exclusive_write", 00:08:44.517 "zoned": false, 00:08:44.517 "supported_io_types": { 00:08:44.517 "read": true, 00:08:44.517 "write": true, 00:08:44.517 "unmap": true, 00:08:44.517 "flush": true, 00:08:44.517 "reset": true, 00:08:44.517 "nvme_admin": false, 00:08:44.517 "nvme_io": false, 00:08:44.517 "nvme_io_md": false, 00:08:44.517 "write_zeroes": true, 00:08:44.517 "zcopy": true, 00:08:44.517 "get_zone_info": false, 00:08:44.517 "zone_management": false, 00:08:44.517 "zone_append": false, 00:08:44.517 "compare": false, 00:08:44.517 "compare_and_write": false, 00:08:44.517 "abort": true, 00:08:44.517 "seek_hole": false, 00:08:44.517 "seek_data": false, 00:08:44.517 "copy": true, 00:08:44.517 "nvme_iov_md": false 00:08:44.517 }, 00:08:44.517 "memory_domains": [ 00:08:44.517 { 00:08:44.517 "dma_device_id": "system", 00:08:44.517 "dma_device_type": 1 00:08:44.517 }, 00:08:44.517 { 00:08:44.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.517 "dma_device_type": 2 00:08:44.517 } 00:08:44.517 ], 00:08:44.517 "driver_specific": {} 00:08:44.517 } 00:08:44.517 ] 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.517 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.517 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.517 "name": "Existed_Raid", 00:08:44.517 "uuid": "3af6a26d-a81e-4576-afb9-3d2adba35f4a", 00:08:44.517 "strip_size_kb": 64, 00:08:44.517 "state": "online", 00:08:44.517 "raid_level": "raid0", 00:08:44.517 "superblock": false, 00:08:44.517 "num_base_bdevs": 3, 00:08:44.517 "num_base_bdevs_discovered": 3, 00:08:44.517 "num_base_bdevs_operational": 3, 00:08:44.517 "base_bdevs_list": [ 
00:08:44.517 { 00:08:44.517 "name": "NewBaseBdev", 00:08:44.517 "uuid": "8726e8c0-4be5-433a-aa61-2c4c2e708bbb", 00:08:44.517 "is_configured": true, 00:08:44.517 "data_offset": 0, 00:08:44.517 "data_size": 65536 00:08:44.517 }, 00:08:44.517 { 00:08:44.517 "name": "BaseBdev2", 00:08:44.517 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:44.517 "is_configured": true, 00:08:44.517 "data_offset": 0, 00:08:44.517 "data_size": 65536 00:08:44.517 }, 00:08:44.517 { 00:08:44.517 "name": "BaseBdev3", 00:08:44.517 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:44.517 "is_configured": true, 00:08:44.517 "data_offset": 0, 00:08:44.517 "data_size": 65536 00:08:44.517 } 00:08:44.517 ] 00:08:44.517 }' 00:08:44.517 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.517 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.777 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.777 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.777 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.777 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.038 [2024-11-28 18:49:14.391371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.038 "name": "Existed_Raid", 00:08:45.038 "aliases": [ 00:08:45.038 "3af6a26d-a81e-4576-afb9-3d2adba35f4a" 00:08:45.038 ], 00:08:45.038 "product_name": "Raid Volume", 00:08:45.038 "block_size": 512, 00:08:45.038 "num_blocks": 196608, 00:08:45.038 "uuid": "3af6a26d-a81e-4576-afb9-3d2adba35f4a", 00:08:45.038 "assigned_rate_limits": { 00:08:45.038 "rw_ios_per_sec": 0, 00:08:45.038 "rw_mbytes_per_sec": 0, 00:08:45.038 "r_mbytes_per_sec": 0, 00:08:45.038 "w_mbytes_per_sec": 0 00:08:45.038 }, 00:08:45.038 "claimed": false, 00:08:45.038 "zoned": false, 00:08:45.038 "supported_io_types": { 00:08:45.038 "read": true, 00:08:45.038 "write": true, 00:08:45.038 "unmap": true, 00:08:45.038 "flush": true, 00:08:45.038 "reset": true, 00:08:45.038 "nvme_admin": false, 00:08:45.038 "nvme_io": false, 00:08:45.038 "nvme_io_md": false, 00:08:45.038 "write_zeroes": true, 00:08:45.038 "zcopy": false, 00:08:45.038 "get_zone_info": false, 00:08:45.038 "zone_management": false, 00:08:45.038 "zone_append": false, 00:08:45.038 "compare": false, 00:08:45.038 "compare_and_write": false, 00:08:45.038 "abort": false, 00:08:45.038 "seek_hole": false, 00:08:45.038 "seek_data": false, 00:08:45.038 "copy": false, 00:08:45.038 "nvme_iov_md": false 00:08:45.038 }, 00:08:45.038 "memory_domains": [ 00:08:45.038 { 00:08:45.038 "dma_device_id": "system", 00:08:45.038 "dma_device_type": 1 00:08:45.038 }, 00:08:45.038 { 00:08:45.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.038 "dma_device_type": 2 00:08:45.038 }, 00:08:45.038 { 00:08:45.038 "dma_device_id": "system", 00:08:45.038 "dma_device_type": 1 00:08:45.038 }, 
00:08:45.038 { 00:08:45.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.038 "dma_device_type": 2 00:08:45.038 }, 00:08:45.038 { 00:08:45.038 "dma_device_id": "system", 00:08:45.038 "dma_device_type": 1 00:08:45.038 }, 00:08:45.038 { 00:08:45.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.038 "dma_device_type": 2 00:08:45.038 } 00:08:45.038 ], 00:08:45.038 "driver_specific": { 00:08:45.038 "raid": { 00:08:45.038 "uuid": "3af6a26d-a81e-4576-afb9-3d2adba35f4a", 00:08:45.038 "strip_size_kb": 64, 00:08:45.038 "state": "online", 00:08:45.038 "raid_level": "raid0", 00:08:45.038 "superblock": false, 00:08:45.038 "num_base_bdevs": 3, 00:08:45.038 "num_base_bdevs_discovered": 3, 00:08:45.038 "num_base_bdevs_operational": 3, 00:08:45.038 "base_bdevs_list": [ 00:08:45.038 { 00:08:45.038 "name": "NewBaseBdev", 00:08:45.038 "uuid": "8726e8c0-4be5-433a-aa61-2c4c2e708bbb", 00:08:45.038 "is_configured": true, 00:08:45.038 "data_offset": 0, 00:08:45.038 "data_size": 65536 00:08:45.038 }, 00:08:45.038 { 00:08:45.038 "name": "BaseBdev2", 00:08:45.038 "uuid": "cb752023-ee50-4b7a-9369-9fe24e6afa93", 00:08:45.038 "is_configured": true, 00:08:45.038 "data_offset": 0, 00:08:45.038 "data_size": 65536 00:08:45.038 }, 00:08:45.038 { 00:08:45.038 "name": "BaseBdev3", 00:08:45.038 "uuid": "702566b5-e253-4fde-8d57-a222be38ebac", 00:08:45.038 "is_configured": true, 00:08:45.038 "data_offset": 0, 00:08:45.038 "data_size": 65536 00:08:45.038 } 00:08:45.038 ] 00:08:45.038 } 00:08:45.038 } 00:08:45.038 }' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:45.038 BaseBdev2 00:08:45.038 BaseBdev3' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.038 [2024-11-28 18:49:14.623110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.038 [2024-11-28 18:49:14.623145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.038 [2024-11-28 18:49:14.623216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.038 [2024-11-28 18:49:14.623272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.038 [2024-11-28 18:49:14.623281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:45.038 18:49:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76575 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 76575 ']' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 76575 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.038 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76575 00:08:45.298 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.298 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.298 killing process with pid 76575 00:08:45.298 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76575' 00:08:45.298 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 76575 00:08:45.298 [2024-11-28 18:49:14.671144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.298 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 76575 00:08:45.298 [2024-11-28 18:49:14.700564] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:45.558 00:08:45.558 real 0m8.642s 00:08:45.558 user 0m14.780s 00:08:45.558 sys 0m1.704s 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:45.558 ************************************ 00:08:45.558 END TEST raid_state_function_test 00:08:45.558 ************************************ 00:08:45.558 18:49:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:45.558 18:49:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:45.558 18:49:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.558 18:49:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.558 ************************************ 00:08:45.558 START TEST raid_state_function_test_sb 00:08:45.558 ************************************ 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i++ )) 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:45.558 18:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77180 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:45.558 Process raid pid: 77180 00:08:45.558 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77180' 00:08:45.559 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77180 00:08:45.559 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77180 ']' 00:08:45.559 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.559 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.559 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.559 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.559 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.559 [2024-11-28 18:49:15.085649] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:45.559 [2024-11-28 18:49:15.085786] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.817 [2024-11-28 18:49:15.221057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:45.817 [2024-11-28 18:49:15.258086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.817 [2024-11-28 18:49:15.282946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.817 [2024-11-28 18:49:15.324697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.817 [2024-11-28 18:49:15.324738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.386 [2024-11-28 18:49:15.904124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.386 [2024-11-28 18:49:15.904178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.386 [2024-11-28 18:49:15.904198] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.386 [2024-11-28 18:49:15.904206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.386 [2024-11-28 18:49:15.904218] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.386 [2024-11-28 18:49:15.904224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.386 18:49:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.386 "name": "Existed_Raid", 00:08:46.386 "uuid": "4f7f13fd-86ba-41a3-a71e-3053b5cc474f", 00:08:46.386 "strip_size_kb": 64, 
00:08:46.386 "state": "configuring", 00:08:46.386 "raid_level": "raid0", 00:08:46.386 "superblock": true, 00:08:46.386 "num_base_bdevs": 3, 00:08:46.386 "num_base_bdevs_discovered": 0, 00:08:46.386 "num_base_bdevs_operational": 3, 00:08:46.386 "base_bdevs_list": [ 00:08:46.386 { 00:08:46.386 "name": "BaseBdev1", 00:08:46.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.386 "is_configured": false, 00:08:46.386 "data_offset": 0, 00:08:46.386 "data_size": 0 00:08:46.386 }, 00:08:46.386 { 00:08:46.386 "name": "BaseBdev2", 00:08:46.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.386 "is_configured": false, 00:08:46.386 "data_offset": 0, 00:08:46.386 "data_size": 0 00:08:46.386 }, 00:08:46.386 { 00:08:46.386 "name": "BaseBdev3", 00:08:46.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.386 "is_configured": false, 00:08:46.386 "data_offset": 0, 00:08:46.386 "data_size": 0 00:08:46.386 } 00:08:46.386 ] 00:08:46.386 }' 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.386 18:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.956 [2024-11-28 18:49:16.308139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.956 [2024-11-28 18:49:16.308174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.956 [2024-11-28 18:49:16.316175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.956 [2024-11-28 18:49:16.316212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.956 [2024-11-28 18:49:16.316223] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.956 [2024-11-28 18:49:16.316230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.956 [2024-11-28 18:49:16.316237] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.956 [2024-11-28 18:49:16.316244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.956 [2024-11-28 18:49:16.332977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.956 BaseBdev1 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.956 [ 00:08:46.956 { 00:08:46.956 "name": "BaseBdev1", 00:08:46.956 "aliases": [ 00:08:46.956 "86c9e8ff-7b51-4823-b1a7-5e6d37d2098c" 00:08:46.956 ], 00:08:46.956 "product_name": "Malloc disk", 00:08:46.956 "block_size": 512, 00:08:46.956 "num_blocks": 65536, 00:08:46.956 "uuid": "86c9e8ff-7b51-4823-b1a7-5e6d37d2098c", 00:08:46.956 "assigned_rate_limits": { 00:08:46.956 "rw_ios_per_sec": 0, 00:08:46.956 "rw_mbytes_per_sec": 0, 00:08:46.956 "r_mbytes_per_sec": 0, 00:08:46.956 "w_mbytes_per_sec": 0 00:08:46.956 }, 00:08:46.956 "claimed": true, 00:08:46.956 "claim_type": "exclusive_write", 00:08:46.956 "zoned": false, 00:08:46.956 "supported_io_types": { 
00:08:46.956 "read": true, 00:08:46.956 "write": true, 00:08:46.956 "unmap": true, 00:08:46.956 "flush": true, 00:08:46.956 "reset": true, 00:08:46.956 "nvme_admin": false, 00:08:46.956 "nvme_io": false, 00:08:46.956 "nvme_io_md": false, 00:08:46.956 "write_zeroes": true, 00:08:46.956 "zcopy": true, 00:08:46.956 "get_zone_info": false, 00:08:46.956 "zone_management": false, 00:08:46.956 "zone_append": false, 00:08:46.956 "compare": false, 00:08:46.956 "compare_and_write": false, 00:08:46.956 "abort": true, 00:08:46.956 "seek_hole": false, 00:08:46.956 "seek_data": false, 00:08:46.956 "copy": true, 00:08:46.956 "nvme_iov_md": false 00:08:46.956 }, 00:08:46.956 "memory_domains": [ 00:08:46.956 { 00:08:46.956 "dma_device_id": "system", 00:08:46.956 "dma_device_type": 1 00:08:46.956 }, 00:08:46.956 { 00:08:46.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.956 "dma_device_type": 2 00:08:46.956 } 00:08:46.956 ], 00:08:46.956 "driver_specific": {} 00:08:46.956 } 00:08:46.956 ] 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.956 18:49:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.956 "name": "Existed_Raid", 00:08:46.956 "uuid": "f55bfe4d-cbaa-4f45-a5a2-9fbe44262c05", 00:08:46.956 "strip_size_kb": 64, 00:08:46.956 "state": "configuring", 00:08:46.956 "raid_level": "raid0", 00:08:46.956 "superblock": true, 00:08:46.956 "num_base_bdevs": 3, 00:08:46.956 "num_base_bdevs_discovered": 1, 00:08:46.956 "num_base_bdevs_operational": 3, 00:08:46.956 "base_bdevs_list": [ 00:08:46.956 { 00:08:46.956 "name": "BaseBdev1", 00:08:46.956 "uuid": "86c9e8ff-7b51-4823-b1a7-5e6d37d2098c", 00:08:46.956 "is_configured": true, 00:08:46.956 "data_offset": 2048, 00:08:46.956 "data_size": 63488 00:08:46.956 }, 00:08:46.956 { 00:08:46.956 "name": "BaseBdev2", 00:08:46.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.956 "is_configured": false, 00:08:46.956 "data_offset": 0, 00:08:46.956 "data_size": 0 00:08:46.956 }, 00:08:46.956 { 00:08:46.956 "name": 
"BaseBdev3", 00:08:46.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.956 "is_configured": false, 00:08:46.956 "data_offset": 0, 00:08:46.956 "data_size": 0 00:08:46.956 } 00:08:46.956 ] 00:08:46.956 }' 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.956 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.216 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.216 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.216 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.476 [2024-11-28 18:49:16.821138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.476 [2024-11-28 18:49:16.821188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.476 [2024-11-28 18:49:16.833189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.476 [2024-11-28 18:49:16.835004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.476 [2024-11-28 18:49:16.835041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.476 [2024-11-28 18:49:16.835053] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.476 [2024-11-28 18:49:16.835077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.476 18:49:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.476 "name": "Existed_Raid", 00:08:47.476 "uuid": "e3482f53-3af7-4f21-81ca-332a10943151", 00:08:47.476 "strip_size_kb": 64, 00:08:47.476 "state": "configuring", 00:08:47.476 "raid_level": "raid0", 00:08:47.476 "superblock": true, 00:08:47.476 "num_base_bdevs": 3, 00:08:47.476 "num_base_bdevs_discovered": 1, 00:08:47.476 "num_base_bdevs_operational": 3, 00:08:47.476 "base_bdevs_list": [ 00:08:47.476 { 00:08:47.476 "name": "BaseBdev1", 00:08:47.476 "uuid": "86c9e8ff-7b51-4823-b1a7-5e6d37d2098c", 00:08:47.476 "is_configured": true, 00:08:47.476 "data_offset": 2048, 00:08:47.476 "data_size": 63488 00:08:47.476 }, 00:08:47.476 { 00:08:47.476 "name": "BaseBdev2", 00:08:47.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.476 "is_configured": false, 00:08:47.476 "data_offset": 0, 00:08:47.476 "data_size": 0 00:08:47.476 }, 00:08:47.476 { 00:08:47.476 "name": "BaseBdev3", 00:08:47.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.476 "is_configured": false, 00:08:47.476 "data_offset": 0, 00:08:47.476 "data_size": 0 00:08:47.476 } 00:08:47.476 ] 00:08:47.476 }' 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.476 18:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.737 [2024-11-28 18:49:17.296140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.737 BaseBdev2 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.737 [ 00:08:47.737 { 00:08:47.737 "name": "BaseBdev2", 00:08:47.737 "aliases": [ 00:08:47.737 
"dac8cba0-38a3-4a9c-821c-7876fd28c0a5" 00:08:47.737 ], 00:08:47.737 "product_name": "Malloc disk", 00:08:47.737 "block_size": 512, 00:08:47.737 "num_blocks": 65536, 00:08:47.737 "uuid": "dac8cba0-38a3-4a9c-821c-7876fd28c0a5", 00:08:47.737 "assigned_rate_limits": { 00:08:47.737 "rw_ios_per_sec": 0, 00:08:47.737 "rw_mbytes_per_sec": 0, 00:08:47.737 "r_mbytes_per_sec": 0, 00:08:47.737 "w_mbytes_per_sec": 0 00:08:47.737 }, 00:08:47.737 "claimed": true, 00:08:47.737 "claim_type": "exclusive_write", 00:08:47.737 "zoned": false, 00:08:47.737 "supported_io_types": { 00:08:47.737 "read": true, 00:08:47.737 "write": true, 00:08:47.737 "unmap": true, 00:08:47.737 "flush": true, 00:08:47.737 "reset": true, 00:08:47.737 "nvme_admin": false, 00:08:47.737 "nvme_io": false, 00:08:47.737 "nvme_io_md": false, 00:08:47.737 "write_zeroes": true, 00:08:47.737 "zcopy": true, 00:08:47.737 "get_zone_info": false, 00:08:47.737 "zone_management": false, 00:08:47.737 "zone_append": false, 00:08:47.737 "compare": false, 00:08:47.737 "compare_and_write": false, 00:08:47.737 "abort": true, 00:08:47.737 "seek_hole": false, 00:08:47.737 "seek_data": false, 00:08:47.737 "copy": true, 00:08:47.737 "nvme_iov_md": false 00:08:47.737 }, 00:08:47.737 "memory_domains": [ 00:08:47.737 { 00:08:47.737 "dma_device_id": "system", 00:08:47.737 "dma_device_type": 1 00:08:47.737 }, 00:08:47.737 { 00:08:47.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.737 "dma_device_type": 2 00:08:47.737 } 00:08:47.737 ], 00:08:47.737 "driver_specific": {} 00:08:47.737 } 00:08:47.737 ] 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.737 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.997 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.997 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.997 "name": "Existed_Raid", 00:08:47.997 "uuid": "e3482f53-3af7-4f21-81ca-332a10943151", 00:08:47.997 
"strip_size_kb": 64, 00:08:47.997 "state": "configuring", 00:08:47.997 "raid_level": "raid0", 00:08:47.997 "superblock": true, 00:08:47.997 "num_base_bdevs": 3, 00:08:47.997 "num_base_bdevs_discovered": 2, 00:08:47.998 "num_base_bdevs_operational": 3, 00:08:47.998 "base_bdevs_list": [ 00:08:47.998 { 00:08:47.998 "name": "BaseBdev1", 00:08:47.998 "uuid": "86c9e8ff-7b51-4823-b1a7-5e6d37d2098c", 00:08:47.998 "is_configured": true, 00:08:47.998 "data_offset": 2048, 00:08:47.998 "data_size": 63488 00:08:47.998 }, 00:08:47.998 { 00:08:47.998 "name": "BaseBdev2", 00:08:47.998 "uuid": "dac8cba0-38a3-4a9c-821c-7876fd28c0a5", 00:08:47.998 "is_configured": true, 00:08:47.998 "data_offset": 2048, 00:08:47.998 "data_size": 63488 00:08:47.998 }, 00:08:47.998 { 00:08:47.998 "name": "BaseBdev3", 00:08:47.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.998 "is_configured": false, 00:08:47.998 "data_offset": 0, 00:08:47.998 "data_size": 0 00:08:47.998 } 00:08:47.998 ] 00:08:47.998 }' 00:08:47.998 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.998 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.258 [2024-11-28 18:49:17.790307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.258 [2024-11-28 18:49:17.790890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:48.258 [2024-11-28 18:49:17.790954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.258 BaseBdev3 00:08:48.258 [2024-11-28 18:49:17.791945] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.258 [2024-11-28 18:49:17.792371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:48.258 [2024-11-28 18:49:17.792480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:48.258 [2024-11-28 18:49:17.792813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.258 [ 00:08:48.258 { 00:08:48.258 "name": "BaseBdev3", 00:08:48.258 "aliases": [ 00:08:48.258 "f4a626b4-4bc0-4d98-b34c-d2b2e76bfe27" 00:08:48.258 ], 00:08:48.258 "product_name": "Malloc disk", 00:08:48.258 "block_size": 512, 00:08:48.258 "num_blocks": 65536, 00:08:48.258 "uuid": "f4a626b4-4bc0-4d98-b34c-d2b2e76bfe27", 00:08:48.258 "assigned_rate_limits": { 00:08:48.258 "rw_ios_per_sec": 0, 00:08:48.258 "rw_mbytes_per_sec": 0, 00:08:48.258 "r_mbytes_per_sec": 0, 00:08:48.258 "w_mbytes_per_sec": 0 00:08:48.258 }, 00:08:48.258 "claimed": true, 00:08:48.258 "claim_type": "exclusive_write", 00:08:48.258 "zoned": false, 00:08:48.258 "supported_io_types": { 00:08:48.258 "read": true, 00:08:48.258 "write": true, 00:08:48.258 "unmap": true, 00:08:48.258 "flush": true, 00:08:48.258 "reset": true, 00:08:48.258 "nvme_admin": false, 00:08:48.258 "nvme_io": false, 00:08:48.258 "nvme_io_md": false, 00:08:48.258 "write_zeroes": true, 00:08:48.258 "zcopy": true, 00:08:48.258 "get_zone_info": false, 00:08:48.258 "zone_management": false, 00:08:48.258 "zone_append": false, 00:08:48.258 "compare": false, 00:08:48.258 "compare_and_write": false, 00:08:48.258 "abort": true, 00:08:48.258 "seek_hole": false, 00:08:48.258 "seek_data": false, 00:08:48.258 "copy": true, 00:08:48.258 "nvme_iov_md": false 00:08:48.258 }, 00:08:48.258 "memory_domains": [ 00:08:48.258 { 00:08:48.258 "dma_device_id": "system", 00:08:48.258 "dma_device_type": 1 00:08:48.258 }, 00:08:48.258 { 00:08:48.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.258 "dma_device_type": 2 00:08:48.258 } 00:08:48.258 ], 00:08:48.258 "driver_specific": {} 00:08:48.258 } 00:08:48.258 ] 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.258 
18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.258 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.259 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.518 18:49:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.518 "name": "Existed_Raid", 00:08:48.518 "uuid": "e3482f53-3af7-4f21-81ca-332a10943151", 00:08:48.518 "strip_size_kb": 64, 00:08:48.518 "state": "online", 00:08:48.518 "raid_level": "raid0", 00:08:48.518 "superblock": true, 00:08:48.518 "num_base_bdevs": 3, 00:08:48.518 "num_base_bdevs_discovered": 3, 00:08:48.518 "num_base_bdevs_operational": 3, 00:08:48.518 "base_bdevs_list": [ 00:08:48.518 { 00:08:48.518 "name": "BaseBdev1", 00:08:48.518 "uuid": "86c9e8ff-7b51-4823-b1a7-5e6d37d2098c", 00:08:48.518 "is_configured": true, 00:08:48.518 "data_offset": 2048, 00:08:48.518 "data_size": 63488 00:08:48.518 }, 00:08:48.518 { 00:08:48.518 "name": "BaseBdev2", 00:08:48.518 "uuid": "dac8cba0-38a3-4a9c-821c-7876fd28c0a5", 00:08:48.518 "is_configured": true, 00:08:48.518 "data_offset": 2048, 00:08:48.518 "data_size": 63488 00:08:48.518 }, 00:08:48.518 { 00:08:48.519 "name": "BaseBdev3", 00:08:48.519 "uuid": "f4a626b4-4bc0-4d98-b34c-d2b2e76bfe27", 00:08:48.519 "is_configured": true, 00:08:48.519 "data_offset": 2048, 00:08:48.519 "data_size": 63488 00:08:48.519 } 00:08:48.519 ] 00:08:48.519 }' 00:08:48.519 18:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.519 18:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.778 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:48.778 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:48.778 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.779 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.779 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.779 
18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.779 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:48.779 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.779 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.779 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.779 [2024-11-28 18:49:18.294686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.779 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.779 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.779 "name": "Existed_Raid", 00:08:48.779 "aliases": [ 00:08:48.779 "e3482f53-3af7-4f21-81ca-332a10943151" 00:08:48.779 ], 00:08:48.779 "product_name": "Raid Volume", 00:08:48.779 "block_size": 512, 00:08:48.779 "num_blocks": 190464, 00:08:48.779 "uuid": "e3482f53-3af7-4f21-81ca-332a10943151", 00:08:48.779 "assigned_rate_limits": { 00:08:48.779 "rw_ios_per_sec": 0, 00:08:48.779 "rw_mbytes_per_sec": 0, 00:08:48.779 "r_mbytes_per_sec": 0, 00:08:48.779 "w_mbytes_per_sec": 0 00:08:48.779 }, 00:08:48.779 "claimed": false, 00:08:48.779 "zoned": false, 00:08:48.779 "supported_io_types": { 00:08:48.779 "read": true, 00:08:48.779 "write": true, 00:08:48.779 "unmap": true, 00:08:48.779 "flush": true, 00:08:48.779 "reset": true, 00:08:48.779 "nvme_admin": false, 00:08:48.779 "nvme_io": false, 00:08:48.779 "nvme_io_md": false, 00:08:48.779 "write_zeroes": true, 00:08:48.779 "zcopy": false, 00:08:48.779 "get_zone_info": false, 00:08:48.779 "zone_management": false, 00:08:48.779 "zone_append": false, 00:08:48.779 "compare": false, 00:08:48.779 "compare_and_write": false, 00:08:48.779 "abort": 
false, 00:08:48.779 "seek_hole": false, 00:08:48.779 "seek_data": false, 00:08:48.779 "copy": false, 00:08:48.779 "nvme_iov_md": false 00:08:48.779 }, 00:08:48.779 "memory_domains": [ 00:08:48.779 { 00:08:48.779 "dma_device_id": "system", 00:08:48.779 "dma_device_type": 1 00:08:48.779 }, 00:08:48.779 { 00:08:48.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.779 "dma_device_type": 2 00:08:48.779 }, 00:08:48.779 { 00:08:48.779 "dma_device_id": "system", 00:08:48.779 "dma_device_type": 1 00:08:48.779 }, 00:08:48.779 { 00:08:48.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.779 "dma_device_type": 2 00:08:48.779 }, 00:08:48.779 { 00:08:48.779 "dma_device_id": "system", 00:08:48.779 "dma_device_type": 1 00:08:48.779 }, 00:08:48.779 { 00:08:48.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.779 "dma_device_type": 2 00:08:48.779 } 00:08:48.779 ], 00:08:48.779 "driver_specific": { 00:08:48.779 "raid": { 00:08:48.779 "uuid": "e3482f53-3af7-4f21-81ca-332a10943151", 00:08:48.779 "strip_size_kb": 64, 00:08:48.779 "state": "online", 00:08:48.779 "raid_level": "raid0", 00:08:48.779 "superblock": true, 00:08:48.779 "num_base_bdevs": 3, 00:08:48.779 "num_base_bdevs_discovered": 3, 00:08:48.779 "num_base_bdevs_operational": 3, 00:08:48.779 "base_bdevs_list": [ 00:08:48.779 { 00:08:48.779 "name": "BaseBdev1", 00:08:48.779 "uuid": "86c9e8ff-7b51-4823-b1a7-5e6d37d2098c", 00:08:48.779 "is_configured": true, 00:08:48.779 "data_offset": 2048, 00:08:48.779 "data_size": 63488 00:08:48.779 }, 00:08:48.779 { 00:08:48.779 "name": "BaseBdev2", 00:08:48.779 "uuid": "dac8cba0-38a3-4a9c-821c-7876fd28c0a5", 00:08:48.779 "is_configured": true, 00:08:48.779 "data_offset": 2048, 00:08:48.779 "data_size": 63488 00:08:48.779 }, 00:08:48.779 { 00:08:48.779 "name": "BaseBdev3", 00:08:48.779 "uuid": "f4a626b4-4bc0-4d98-b34c-d2b2e76bfe27", 00:08:48.779 "is_configured": true, 00:08:48.779 "data_offset": 2048, 00:08:48.779 "data_size": 63488 00:08:48.779 } 00:08:48.779 ] 00:08:48.779 } 
00:08:48.779 } 00:08:48.779 }' 00:08:48.779 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.039 BaseBdev2 00:08:49.039 BaseBdev3' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.039 [2024-11-28 18:49:18.566534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:08:49.039 [2024-11-28 18:49:18.566570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.039 [2024-11-28 18:49:18.566618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:49.039 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.040 18:49:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.040 "name": "Existed_Raid", 00:08:49.040 "uuid": "e3482f53-3af7-4f21-81ca-332a10943151", 00:08:49.040 "strip_size_kb": 64, 00:08:49.040 "state": "offline", 00:08:49.040 "raid_level": "raid0", 00:08:49.040 "superblock": true, 00:08:49.040 "num_base_bdevs": 3, 00:08:49.040 "num_base_bdevs_discovered": 2, 00:08:49.040 "num_base_bdevs_operational": 2, 00:08:49.040 "base_bdevs_list": [ 00:08:49.040 { 00:08:49.040 "name": null, 00:08:49.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.040 "is_configured": false, 00:08:49.040 "data_offset": 0, 00:08:49.040 "data_size": 63488 00:08:49.040 }, 00:08:49.040 { 00:08:49.040 "name": "BaseBdev2", 00:08:49.040 "uuid": "dac8cba0-38a3-4a9c-821c-7876fd28c0a5", 00:08:49.040 "is_configured": true, 00:08:49.040 "data_offset": 2048, 00:08:49.040 "data_size": 63488 00:08:49.040 }, 00:08:49.040 { 00:08:49.040 "name": "BaseBdev3", 00:08:49.040 "uuid": "f4a626b4-4bc0-4d98-b34c-d2b2e76bfe27", 00:08:49.040 "is_configured": true, 00:08:49.040 "data_offset": 2048, 00:08:49.040 "data_size": 63488 00:08:49.040 } 00:08:49.040 ] 00:08:49.040 }' 00:08:49.040 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.040 18:49:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.610 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:49.610 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.610 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.610 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.611 18:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 18:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 [2024-11-28 18:49:19.041794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 [2024-11-28 18:49:19.108840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:49.611 [2024-11-28 18:49:19.108896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.611 18:49:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 BaseBdev2 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 [ 00:08:49.611 { 00:08:49.611 "name": "BaseBdev2", 00:08:49.611 "aliases": [ 00:08:49.611 "a8162931-875e-4c7c-80f9-8d8ec33b40b7" 00:08:49.611 ], 00:08:49.611 "product_name": "Malloc disk", 00:08:49.611 "block_size": 512, 00:08:49.611 "num_blocks": 65536, 00:08:49.611 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:49.611 "assigned_rate_limits": { 00:08:49.611 "rw_ios_per_sec": 0, 00:08:49.611 "rw_mbytes_per_sec": 0, 00:08:49.611 "r_mbytes_per_sec": 0, 00:08:49.611 "w_mbytes_per_sec": 0 00:08:49.611 }, 00:08:49.611 "claimed": false, 00:08:49.611 "zoned": false, 00:08:49.611 "supported_io_types": { 00:08:49.611 "read": true, 00:08:49.611 "write": true, 00:08:49.611 "unmap": true, 00:08:49.611 "flush": true, 00:08:49.611 "reset": true, 00:08:49.611 "nvme_admin": false, 00:08:49.611 "nvme_io": false, 00:08:49.611 "nvme_io_md": false, 00:08:49.611 "write_zeroes": true, 00:08:49.611 "zcopy": true, 00:08:49.611 "get_zone_info": false, 00:08:49.611 "zone_management": false, 00:08:49.611 "zone_append": false, 00:08:49.611 "compare": false, 00:08:49.611 "compare_and_write": false, 00:08:49.611 "abort": true, 00:08:49.611 "seek_hole": false, 00:08:49.611 "seek_data": false, 00:08:49.611 "copy": true, 00:08:49.611 "nvme_iov_md": false 00:08:49.611 }, 
00:08:49.611 "memory_domains": [ 00:08:49.611 { 00:08:49.611 "dma_device_id": "system", 00:08:49.611 "dma_device_type": 1 00:08:49.611 }, 00:08:49.611 { 00:08:49.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.611 "dma_device_type": 2 00:08:49.611 } 00:08:49.611 ], 00:08:49.611 "driver_specific": {} 00:08:49.611 } 00:08:49.611 ] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.611 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 BaseBdev3 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.872 18:49:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 [ 00:08:49.872 { 00:08:49.872 "name": "BaseBdev3", 00:08:49.872 "aliases": [ 00:08:49.872 "1b8893e6-7def-4a21-b882-1f21f63f07fd" 00:08:49.872 ], 00:08:49.872 "product_name": "Malloc disk", 00:08:49.872 "block_size": 512, 00:08:49.872 "num_blocks": 65536, 00:08:49.872 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:49.872 "assigned_rate_limits": { 00:08:49.872 "rw_ios_per_sec": 0, 00:08:49.872 "rw_mbytes_per_sec": 0, 00:08:49.872 "r_mbytes_per_sec": 0, 00:08:49.872 "w_mbytes_per_sec": 0 00:08:49.872 }, 00:08:49.872 "claimed": false, 00:08:49.872 "zoned": false, 00:08:49.872 "supported_io_types": { 00:08:49.872 "read": true, 00:08:49.872 "write": true, 00:08:49.872 "unmap": true, 00:08:49.872 "flush": true, 00:08:49.872 "reset": true, 00:08:49.872 "nvme_admin": false, 00:08:49.872 "nvme_io": false, 00:08:49.872 "nvme_io_md": false, 00:08:49.872 "write_zeroes": true, 00:08:49.872 "zcopy": true, 00:08:49.872 "get_zone_info": false, 00:08:49.872 "zone_management": false, 00:08:49.872 "zone_append": false, 00:08:49.872 "compare": false, 00:08:49.872 "compare_and_write": false, 00:08:49.872 "abort": true, 00:08:49.872 "seek_hole": false, 00:08:49.872 "seek_data": false, 
00:08:49.872 "copy": true, 00:08:49.872 "nvme_iov_md": false 00:08:49.872 }, 00:08:49.872 "memory_domains": [ 00:08:49.872 { 00:08:49.872 "dma_device_id": "system", 00:08:49.872 "dma_device_type": 1 00:08:49.872 }, 00:08:49.872 { 00:08:49.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.872 "dma_device_type": 2 00:08:49.872 } 00:08:49.872 ], 00:08:49.872 "driver_specific": {} 00:08:49.872 } 00:08:49.872 ] 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 [2024-11-28 18:49:19.259665] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.872 [2024-11-28 18:49:19.259750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.872 [2024-11-28 18:49:19.259791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.872 [2024-11-28 18:49:19.261570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 3 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.872 "name": "Existed_Raid", 00:08:49.872 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:49.872 "strip_size_kb": 64, 00:08:49.872 "state": "configuring", 00:08:49.872 "raid_level": "raid0", 00:08:49.872 
"superblock": true, 00:08:49.872 "num_base_bdevs": 3, 00:08:49.872 "num_base_bdevs_discovered": 2, 00:08:49.872 "num_base_bdevs_operational": 3, 00:08:49.872 "base_bdevs_list": [ 00:08:49.872 { 00:08:49.872 "name": "BaseBdev1", 00:08:49.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.872 "is_configured": false, 00:08:49.872 "data_offset": 0, 00:08:49.872 "data_size": 0 00:08:49.872 }, 00:08:49.872 { 00:08:49.872 "name": "BaseBdev2", 00:08:49.872 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:49.872 "is_configured": true, 00:08:49.872 "data_offset": 2048, 00:08:49.872 "data_size": 63488 00:08:49.872 }, 00:08:49.872 { 00:08:49.872 "name": "BaseBdev3", 00:08:49.872 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:49.872 "is_configured": true, 00:08:49.872 "data_offset": 2048, 00:08:49.872 "data_size": 63488 00:08:49.872 } 00:08:49.872 ] 00:08:49.872 }' 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.872 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.132 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:50.132 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.132 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.133 [2024-11-28 18:49:19.723780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.133 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.416 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.416 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.416 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.416 "name": "Existed_Raid", 00:08:50.416 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:50.416 "strip_size_kb": 64, 00:08:50.416 "state": "configuring", 00:08:50.416 "raid_level": "raid0", 00:08:50.416 "superblock": true, 00:08:50.416 "num_base_bdevs": 3, 00:08:50.416 "num_base_bdevs_discovered": 1, 00:08:50.416 "num_base_bdevs_operational": 3, 00:08:50.416 "base_bdevs_list": [ 00:08:50.416 { 00:08:50.416 "name": "BaseBdev1", 
00:08:50.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.416 "is_configured": false, 00:08:50.416 "data_offset": 0, 00:08:50.416 "data_size": 0 00:08:50.416 }, 00:08:50.416 { 00:08:50.416 "name": null, 00:08:50.416 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:50.416 "is_configured": false, 00:08:50.416 "data_offset": 0, 00:08:50.416 "data_size": 63488 00:08:50.416 }, 00:08:50.416 { 00:08:50.416 "name": "BaseBdev3", 00:08:50.416 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:50.416 "is_configured": true, 00:08:50.416 "data_offset": 2048, 00:08:50.416 "data_size": 63488 00:08:50.416 } 00:08:50.416 ] 00:08:50.416 }' 00:08:50.416 18:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.416 18:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.686 [2024-11-28 18:49:20.174838] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.686 BaseBdev1 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.686 [ 00:08:50.686 { 00:08:50.686 "name": "BaseBdev1", 00:08:50.686 "aliases": [ 00:08:50.686 "28fb0c10-9f6e-4071-91de-f6a5f52ce517" 00:08:50.686 ], 00:08:50.686 "product_name": "Malloc disk", 00:08:50.686 "block_size": 512, 00:08:50.686 "num_blocks": 65536, 00:08:50.686 
"uuid": "28fb0c10-9f6e-4071-91de-f6a5f52ce517", 00:08:50.686 "assigned_rate_limits": { 00:08:50.686 "rw_ios_per_sec": 0, 00:08:50.686 "rw_mbytes_per_sec": 0, 00:08:50.686 "r_mbytes_per_sec": 0, 00:08:50.686 "w_mbytes_per_sec": 0 00:08:50.686 }, 00:08:50.686 "claimed": true, 00:08:50.686 "claim_type": "exclusive_write", 00:08:50.686 "zoned": false, 00:08:50.686 "supported_io_types": { 00:08:50.686 "read": true, 00:08:50.686 "write": true, 00:08:50.686 "unmap": true, 00:08:50.686 "flush": true, 00:08:50.686 "reset": true, 00:08:50.686 "nvme_admin": false, 00:08:50.686 "nvme_io": false, 00:08:50.686 "nvme_io_md": false, 00:08:50.686 "write_zeroes": true, 00:08:50.686 "zcopy": true, 00:08:50.686 "get_zone_info": false, 00:08:50.686 "zone_management": false, 00:08:50.686 "zone_append": false, 00:08:50.686 "compare": false, 00:08:50.686 "compare_and_write": false, 00:08:50.686 "abort": true, 00:08:50.686 "seek_hole": false, 00:08:50.686 "seek_data": false, 00:08:50.686 "copy": true, 00:08:50.686 "nvme_iov_md": false 00:08:50.686 }, 00:08:50.686 "memory_domains": [ 00:08:50.686 { 00:08:50.686 "dma_device_id": "system", 00:08:50.686 "dma_device_type": 1 00:08:50.686 }, 00:08:50.686 { 00:08:50.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.686 "dma_device_type": 2 00:08:50.686 } 00:08:50.686 ], 00:08:50.686 "driver_specific": {} 00:08:50.686 } 00:08:50.686 ] 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.686 "name": "Existed_Raid", 00:08:50.686 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:50.686 "strip_size_kb": 64, 00:08:50.686 "state": "configuring", 00:08:50.686 "raid_level": "raid0", 00:08:50.686 "superblock": true, 00:08:50.686 "num_base_bdevs": 3, 00:08:50.686 "num_base_bdevs_discovered": 2, 00:08:50.686 "num_base_bdevs_operational": 3, 00:08:50.686 "base_bdevs_list": [ 00:08:50.686 { 00:08:50.686 "name": "BaseBdev1", 00:08:50.686 "uuid": "28fb0c10-9f6e-4071-91de-f6a5f52ce517", 
00:08:50.686 "is_configured": true, 00:08:50.686 "data_offset": 2048, 00:08:50.686 "data_size": 63488 00:08:50.686 }, 00:08:50.686 { 00:08:50.686 "name": null, 00:08:50.686 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:50.686 "is_configured": false, 00:08:50.686 "data_offset": 0, 00:08:50.686 "data_size": 63488 00:08:50.686 }, 00:08:50.686 { 00:08:50.686 "name": "BaseBdev3", 00:08:50.686 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:50.686 "is_configured": true, 00:08:50.686 "data_offset": 2048, 00:08:50.686 "data_size": 63488 00:08:50.686 } 00:08:50.686 ] 00:08:50.686 }' 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.686 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.257 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.257 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.257 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.257 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:51.257 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.257 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:51.257 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:51.257 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.257 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.257 [2024-11-28 18:49:20.663005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.258 18:49:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.258 "name": 
"Existed_Raid", 00:08:51.258 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:51.258 "strip_size_kb": 64, 00:08:51.258 "state": "configuring", 00:08:51.258 "raid_level": "raid0", 00:08:51.258 "superblock": true, 00:08:51.258 "num_base_bdevs": 3, 00:08:51.258 "num_base_bdevs_discovered": 1, 00:08:51.258 "num_base_bdevs_operational": 3, 00:08:51.258 "base_bdevs_list": [ 00:08:51.258 { 00:08:51.258 "name": "BaseBdev1", 00:08:51.258 "uuid": "28fb0c10-9f6e-4071-91de-f6a5f52ce517", 00:08:51.258 "is_configured": true, 00:08:51.258 "data_offset": 2048, 00:08:51.258 "data_size": 63488 00:08:51.258 }, 00:08:51.258 { 00:08:51.258 "name": null, 00:08:51.258 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:51.258 "is_configured": false, 00:08:51.258 "data_offset": 0, 00:08:51.258 "data_size": 63488 00:08:51.258 }, 00:08:51.258 { 00:08:51.258 "name": null, 00:08:51.258 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:51.258 "is_configured": false, 00:08:51.258 "data_offset": 0, 00:08:51.258 "data_size": 63488 00:08:51.258 } 00:08:51.258 ] 00:08:51.258 }' 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.258 18:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.827 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.827 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.827 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.827 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.827 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:51.828 
18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.828 [2024-11-28 18:49:21.171177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.828 "name": "Existed_Raid", 00:08:51.828 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:51.828 "strip_size_kb": 64, 00:08:51.828 "state": "configuring", 00:08:51.828 "raid_level": "raid0", 00:08:51.828 "superblock": true, 00:08:51.828 "num_base_bdevs": 3, 00:08:51.828 "num_base_bdevs_discovered": 2, 00:08:51.828 "num_base_bdevs_operational": 3, 00:08:51.828 "base_bdevs_list": [ 00:08:51.828 { 00:08:51.828 "name": "BaseBdev1", 00:08:51.828 "uuid": "28fb0c10-9f6e-4071-91de-f6a5f52ce517", 00:08:51.828 "is_configured": true, 00:08:51.828 "data_offset": 2048, 00:08:51.828 "data_size": 63488 00:08:51.828 }, 00:08:51.828 { 00:08:51.828 "name": null, 00:08:51.828 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:51.828 "is_configured": false, 00:08:51.828 "data_offset": 0, 00:08:51.828 "data_size": 63488 00:08:51.828 }, 00:08:51.828 { 00:08:51.828 "name": "BaseBdev3", 00:08:51.828 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:51.828 "is_configured": true, 00:08:51.828 "data_offset": 2048, 00:08:51.828 "data_size": 63488 00:08:51.828 } 00:08:51.828 ] 00:08:51.828 }' 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.828 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.088 [2024-11-28 18:49:21.607320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.088 "name": "Existed_Raid", 00:08:52.088 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:52.088 "strip_size_kb": 64, 00:08:52.088 "state": "configuring", 00:08:52.088 "raid_level": "raid0", 00:08:52.088 "superblock": true, 00:08:52.088 "num_base_bdevs": 3, 00:08:52.088 "num_base_bdevs_discovered": 1, 00:08:52.088 "num_base_bdevs_operational": 3, 00:08:52.088 "base_bdevs_list": [ 00:08:52.088 { 00:08:52.088 "name": null, 00:08:52.088 "uuid": "28fb0c10-9f6e-4071-91de-f6a5f52ce517", 00:08:52.088 "is_configured": false, 00:08:52.088 "data_offset": 0, 00:08:52.088 "data_size": 63488 00:08:52.088 }, 00:08:52.088 { 00:08:52.088 "name": null, 00:08:52.088 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:52.088 "is_configured": false, 00:08:52.088 "data_offset": 0, 00:08:52.088 "data_size": 63488 00:08:52.088 }, 00:08:52.088 { 00:08:52.088 "name": "BaseBdev3", 00:08:52.088 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:52.088 "is_configured": true, 00:08:52.088 "data_offset": 2048, 00:08:52.088 
"data_size": 63488 00:08:52.088 } 00:08:52.088 ] 00:08:52.088 }' 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.088 18:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.657 [2024-11-28 18:49:22.069929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.657 18:49:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.657 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.658 "name": "Existed_Raid", 00:08:52.658 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:52.658 "strip_size_kb": 64, 00:08:52.658 "state": "configuring", 00:08:52.658 "raid_level": "raid0", 00:08:52.658 "superblock": true, 00:08:52.658 "num_base_bdevs": 3, 00:08:52.658 "num_base_bdevs_discovered": 2, 00:08:52.658 "num_base_bdevs_operational": 3, 00:08:52.658 "base_bdevs_list": [ 00:08:52.658 { 00:08:52.658 "name": null, 00:08:52.658 "uuid": "28fb0c10-9f6e-4071-91de-f6a5f52ce517", 00:08:52.658 "is_configured": 
false, 00:08:52.658 "data_offset": 0, 00:08:52.658 "data_size": 63488 00:08:52.658 }, 00:08:52.658 { 00:08:52.658 "name": "BaseBdev2", 00:08:52.658 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:52.658 "is_configured": true, 00:08:52.658 "data_offset": 2048, 00:08:52.658 "data_size": 63488 00:08:52.658 }, 00:08:52.658 { 00:08:52.658 "name": "BaseBdev3", 00:08:52.658 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:52.658 "is_configured": true, 00:08:52.658 "data_offset": 2048, 00:08:52.658 "data_size": 63488 00:08:52.658 } 00:08:52.658 ] 00:08:52.658 }' 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.658 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.917 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.176 18:49:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.176 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 28fb0c10-9f6e-4071-91de-f6a5f52ce517 00:08:53.176 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.176 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.176 NewBaseBdev 00:08:53.176 [2024-11-28 18:49:22.572944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:53.176 [2024-11-28 18:49:22.573109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:53.176 [2024-11-28 18:49:22.573121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.176 [2024-11-28 18:49:22.573358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:53.176 [2024-11-28 18:49:22.573486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:53.176 [2024-11-28 18:49:22.573502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:53.176 [2024-11-28 18:49:22.573613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 
00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.177 [ 00:08:53.177 { 00:08:53.177 "name": "NewBaseBdev", 00:08:53.177 "aliases": [ 00:08:53.177 "28fb0c10-9f6e-4071-91de-f6a5f52ce517" 00:08:53.177 ], 00:08:53.177 "product_name": "Malloc disk", 00:08:53.177 "block_size": 512, 00:08:53.177 "num_blocks": 65536, 00:08:53.177 "uuid": "28fb0c10-9f6e-4071-91de-f6a5f52ce517", 00:08:53.177 "assigned_rate_limits": { 00:08:53.177 "rw_ios_per_sec": 0, 00:08:53.177 "rw_mbytes_per_sec": 0, 00:08:53.177 "r_mbytes_per_sec": 0, 00:08:53.177 "w_mbytes_per_sec": 0 00:08:53.177 }, 00:08:53.177 "claimed": true, 00:08:53.177 "claim_type": "exclusive_write", 00:08:53.177 "zoned": false, 00:08:53.177 "supported_io_types": { 00:08:53.177 "read": true, 00:08:53.177 "write": true, 00:08:53.177 "unmap": true, 00:08:53.177 "flush": true, 00:08:53.177 "reset": true, 00:08:53.177 "nvme_admin": false, 00:08:53.177 "nvme_io": false, 00:08:53.177 "nvme_io_md": false, 00:08:53.177 "write_zeroes": true, 00:08:53.177 
"zcopy": true, 00:08:53.177 "get_zone_info": false, 00:08:53.177 "zone_management": false, 00:08:53.177 "zone_append": false, 00:08:53.177 "compare": false, 00:08:53.177 "compare_and_write": false, 00:08:53.177 "abort": true, 00:08:53.177 "seek_hole": false, 00:08:53.177 "seek_data": false, 00:08:53.177 "copy": true, 00:08:53.177 "nvme_iov_md": false 00:08:53.177 }, 00:08:53.177 "memory_domains": [ 00:08:53.177 { 00:08:53.177 "dma_device_id": "system", 00:08:53.177 "dma_device_type": 1 00:08:53.177 }, 00:08:53.177 { 00:08:53.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.177 "dma_device_type": 2 00:08:53.177 } 00:08:53.177 ], 00:08:53.177 "driver_specific": {} 00:08:53.177 } 00:08:53.177 ] 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.177 "name": "Existed_Raid", 00:08:53.177 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:53.177 "strip_size_kb": 64, 00:08:53.177 "state": "online", 00:08:53.177 "raid_level": "raid0", 00:08:53.177 "superblock": true, 00:08:53.177 "num_base_bdevs": 3, 00:08:53.177 "num_base_bdevs_discovered": 3, 00:08:53.177 "num_base_bdevs_operational": 3, 00:08:53.177 "base_bdevs_list": [ 00:08:53.177 { 00:08:53.177 "name": "NewBaseBdev", 00:08:53.177 "uuid": "28fb0c10-9f6e-4071-91de-f6a5f52ce517", 00:08:53.177 "is_configured": true, 00:08:53.177 "data_offset": 2048, 00:08:53.177 "data_size": 63488 00:08:53.177 }, 00:08:53.177 { 00:08:53.177 "name": "BaseBdev2", 00:08:53.177 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:53.177 "is_configured": true, 00:08:53.177 "data_offset": 2048, 00:08:53.177 "data_size": 63488 00:08:53.177 }, 00:08:53.177 { 00:08:53.177 "name": "BaseBdev3", 00:08:53.177 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:53.177 "is_configured": true, 00:08:53.177 "data_offset": 2048, 00:08:53.177 "data_size": 63488 00:08:53.177 } 00:08:53.177 ] 00:08:53.177 }' 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.177 18:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.748 [2024-11-28 18:49:23.065386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.748 "name": "Existed_Raid", 00:08:53.748 "aliases": [ 00:08:53.748 "37e5da3a-182d-41cc-889b-84d128ef69e9" 00:08:53.748 ], 00:08:53.748 "product_name": "Raid Volume", 00:08:53.748 "block_size": 512, 00:08:53.748 "num_blocks": 190464, 00:08:53.748 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:53.748 "assigned_rate_limits": { 00:08:53.748 
"rw_ios_per_sec": 0, 00:08:53.748 "rw_mbytes_per_sec": 0, 00:08:53.748 "r_mbytes_per_sec": 0, 00:08:53.748 "w_mbytes_per_sec": 0 00:08:53.748 }, 00:08:53.748 "claimed": false, 00:08:53.748 "zoned": false, 00:08:53.748 "supported_io_types": { 00:08:53.748 "read": true, 00:08:53.748 "write": true, 00:08:53.748 "unmap": true, 00:08:53.748 "flush": true, 00:08:53.748 "reset": true, 00:08:53.748 "nvme_admin": false, 00:08:53.748 "nvme_io": false, 00:08:53.748 "nvme_io_md": false, 00:08:53.748 "write_zeroes": true, 00:08:53.748 "zcopy": false, 00:08:53.748 "get_zone_info": false, 00:08:53.748 "zone_management": false, 00:08:53.748 "zone_append": false, 00:08:53.748 "compare": false, 00:08:53.748 "compare_and_write": false, 00:08:53.748 "abort": false, 00:08:53.748 "seek_hole": false, 00:08:53.748 "seek_data": false, 00:08:53.748 "copy": false, 00:08:53.748 "nvme_iov_md": false 00:08:53.748 }, 00:08:53.748 "memory_domains": [ 00:08:53.748 { 00:08:53.748 "dma_device_id": "system", 00:08:53.748 "dma_device_type": 1 00:08:53.748 }, 00:08:53.748 { 00:08:53.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.748 "dma_device_type": 2 00:08:53.748 }, 00:08:53.748 { 00:08:53.748 "dma_device_id": "system", 00:08:53.748 "dma_device_type": 1 00:08:53.748 }, 00:08:53.748 { 00:08:53.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.748 "dma_device_type": 2 00:08:53.748 }, 00:08:53.748 { 00:08:53.748 "dma_device_id": "system", 00:08:53.748 "dma_device_type": 1 00:08:53.748 }, 00:08:53.748 { 00:08:53.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.748 "dma_device_type": 2 00:08:53.748 } 00:08:53.748 ], 00:08:53.748 "driver_specific": { 00:08:53.748 "raid": { 00:08:53.748 "uuid": "37e5da3a-182d-41cc-889b-84d128ef69e9", 00:08:53.748 "strip_size_kb": 64, 00:08:53.748 "state": "online", 00:08:53.748 "raid_level": "raid0", 00:08:53.748 "superblock": true, 00:08:53.748 "num_base_bdevs": 3, 00:08:53.748 "num_base_bdevs_discovered": 3, 00:08:53.748 "num_base_bdevs_operational": 
3, 00:08:53.748 "base_bdevs_list": [ 00:08:53.748 { 00:08:53.748 "name": "NewBaseBdev", 00:08:53.748 "uuid": "28fb0c10-9f6e-4071-91de-f6a5f52ce517", 00:08:53.748 "is_configured": true, 00:08:53.748 "data_offset": 2048, 00:08:53.748 "data_size": 63488 00:08:53.748 }, 00:08:53.748 { 00:08:53.748 "name": "BaseBdev2", 00:08:53.748 "uuid": "a8162931-875e-4c7c-80f9-8d8ec33b40b7", 00:08:53.748 "is_configured": true, 00:08:53.748 "data_offset": 2048, 00:08:53.748 "data_size": 63488 00:08:53.748 }, 00:08:53.748 { 00:08:53.748 "name": "BaseBdev3", 00:08:53.748 "uuid": "1b8893e6-7def-4a21-b882-1f21f63f07fd", 00:08:53.748 "is_configured": true, 00:08:53.748 "data_offset": 2048, 00:08:53.748 "data_size": 63488 00:08:53.748 } 00:08:53.748 ] 00:08:53.748 } 00:08:53.748 } 00:08:53.748 }' 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:53.748 BaseBdev2 00:08:53.748 BaseBdev3' 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.748 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.749 18:49:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.749 [2024-11-28 18:49:23.325174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.749 [2024-11-28 18:49:23.325242] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.749 [2024-11-28 18:49:23.325329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.749 [2024-11-28 18:49:23.325407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.749 [2024-11-28 18:49:23.325472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77180 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77180 ']' 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77180 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:08:53.749 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77180 00:08:54.009 killing process with pid 77180 00:08:54.009 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.009 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.009 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77180' 00:08:54.009 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77180 00:08:54.009 [2024-11-28 18:49:23.374699] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.009 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77180 00:08:54.009 [2024-11-28 18:49:23.404366] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.270 ************************************ 00:08:54.270 END TEST raid_state_function_test_sb 00:08:54.270 ************************************ 00:08:54.270 18:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:54.270 00:08:54.270 real 0m8.629s 00:08:54.270 user 0m14.869s 00:08:54.270 sys 0m1.635s 00:08:54.270 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.270 18:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.270 18:49:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:54.270 18:49:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:54.270 18:49:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.270 18:49:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.270 ************************************ 00:08:54.270 START TEST 
raid_superblock_test 00:08:54.270 ************************************ 00:08:54.270 18:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:54.270 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:54.270 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:54.270 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:54.270 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77782 00:08:54.271 18:49:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77782 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 77782 ']' 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.271 18:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.271 [2024-11-28 18:49:23.779555] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:54.271 [2024-11-28 18:49:23.779763] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77782 ] 00:08:54.531 [2024-11-28 18:49:23.913268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:54.531 [2024-11-28 18:49:23.953314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.531 [2024-11-28 18:49:23.977910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.531 [2024-11-28 18:49:24.019529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.531 [2024-11-28 18:49:24.019637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.101 malloc1 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.101 [2024-11-28 18:49:24.616508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.101 [2024-11-28 18:49:24.616621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.101 [2024-11-28 18:49:24.616670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:55.101 [2024-11-28 18:49:24.616728] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.101 [2024-11-28 18:49:24.618807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.101 [2024-11-28 18:49:24.618892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.101 pt1 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.101 malloc2 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.101 [2024-11-28 18:49:24.648954] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.101 [2024-11-28 18:49:24.649054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.101 [2024-11-28 18:49:24.649076] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:55.101 [2024-11-28 18:49:24.649084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.101 [2024-11-28 18:49:24.651132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.101 [2024-11-28 18:49:24.651164] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.101 pt2 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.101 malloc3 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.101 [2024-11-28 18:49:24.677466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:55.101 [2024-11-28 18:49:24.677563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.101 [2024-11-28 18:49:24.677600] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:55.101 [2024-11-28 18:49:24.677627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:55.101 [2024-11-28 18:49:24.679668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.101 [2024-11-28 18:49:24.679732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:55.101 pt3 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.101 [2024-11-28 18:49:24.689500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:55.101 [2024-11-28 18:49:24.691343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.101 [2024-11-28 18:49:24.691449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:55.101 [2024-11-28 18:49:24.691622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:55.101 [2024-11-28 18:49:24.691670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:55.101 [2024-11-28 18:49:24.691933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:55.101 [2024-11-28 18:49:24.692110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:55.101 [2024-11-28 18:49:24.692158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:55.101 [2024-11-28 
18:49:24.692325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.101 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.102 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.361 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.361 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.361 "name": "raid_bdev1", 00:08:55.361 "uuid": 
"0aae7766-7002-4934-b61e-49bee3bcd66d", 00:08:55.361 "strip_size_kb": 64, 00:08:55.361 "state": "online", 00:08:55.361 "raid_level": "raid0", 00:08:55.361 "superblock": true, 00:08:55.361 "num_base_bdevs": 3, 00:08:55.361 "num_base_bdevs_discovered": 3, 00:08:55.361 "num_base_bdevs_operational": 3, 00:08:55.361 "base_bdevs_list": [ 00:08:55.361 { 00:08:55.361 "name": "pt1", 00:08:55.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.361 "is_configured": true, 00:08:55.361 "data_offset": 2048, 00:08:55.361 "data_size": 63488 00:08:55.361 }, 00:08:55.361 { 00:08:55.361 "name": "pt2", 00:08:55.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.361 "is_configured": true, 00:08:55.361 "data_offset": 2048, 00:08:55.361 "data_size": 63488 00:08:55.361 }, 00:08:55.361 { 00:08:55.361 "name": "pt3", 00:08:55.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.361 "is_configured": true, 00:08:55.361 "data_offset": 2048, 00:08:55.361 "data_size": 63488 00:08:55.361 } 00:08:55.361 ] 00:08:55.361 }' 00:08:55.361 18:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.361 18:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.627 [2024-11-28 18:49:25.137884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.627 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.627 "name": "raid_bdev1", 00:08:55.627 "aliases": [ 00:08:55.627 "0aae7766-7002-4934-b61e-49bee3bcd66d" 00:08:55.627 ], 00:08:55.627 "product_name": "Raid Volume", 00:08:55.627 "block_size": 512, 00:08:55.627 "num_blocks": 190464, 00:08:55.627 "uuid": "0aae7766-7002-4934-b61e-49bee3bcd66d", 00:08:55.627 "assigned_rate_limits": { 00:08:55.627 "rw_ios_per_sec": 0, 00:08:55.627 "rw_mbytes_per_sec": 0, 00:08:55.627 "r_mbytes_per_sec": 0, 00:08:55.627 "w_mbytes_per_sec": 0 00:08:55.627 }, 00:08:55.627 "claimed": false, 00:08:55.627 "zoned": false, 00:08:55.627 "supported_io_types": { 00:08:55.627 "read": true, 00:08:55.627 "write": true, 00:08:55.627 "unmap": true, 00:08:55.627 "flush": true, 00:08:55.627 "reset": true, 00:08:55.627 "nvme_admin": false, 00:08:55.627 "nvme_io": false, 00:08:55.627 "nvme_io_md": false, 00:08:55.627 "write_zeroes": true, 00:08:55.627 "zcopy": false, 00:08:55.627 "get_zone_info": false, 00:08:55.627 "zone_management": false, 00:08:55.627 "zone_append": false, 00:08:55.627 "compare": false, 00:08:55.627 "compare_and_write": false, 00:08:55.627 "abort": false, 00:08:55.627 "seek_hole": false, 00:08:55.628 "seek_data": false, 00:08:55.628 "copy": false, 00:08:55.628 "nvme_iov_md": false 00:08:55.628 }, 00:08:55.628 "memory_domains": [ 00:08:55.628 { 00:08:55.628 "dma_device_id": "system", 00:08:55.628 
"dma_device_type": 1 00:08:55.628 }, 00:08:55.628 { 00:08:55.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.628 "dma_device_type": 2 00:08:55.628 }, 00:08:55.628 { 00:08:55.628 "dma_device_id": "system", 00:08:55.628 "dma_device_type": 1 00:08:55.628 }, 00:08:55.628 { 00:08:55.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.628 "dma_device_type": 2 00:08:55.628 }, 00:08:55.628 { 00:08:55.628 "dma_device_id": "system", 00:08:55.628 "dma_device_type": 1 00:08:55.628 }, 00:08:55.628 { 00:08:55.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.628 "dma_device_type": 2 00:08:55.628 } 00:08:55.628 ], 00:08:55.628 "driver_specific": { 00:08:55.628 "raid": { 00:08:55.628 "uuid": "0aae7766-7002-4934-b61e-49bee3bcd66d", 00:08:55.628 "strip_size_kb": 64, 00:08:55.628 "state": "online", 00:08:55.628 "raid_level": "raid0", 00:08:55.628 "superblock": true, 00:08:55.628 "num_base_bdevs": 3, 00:08:55.628 "num_base_bdevs_discovered": 3, 00:08:55.628 "num_base_bdevs_operational": 3, 00:08:55.628 "base_bdevs_list": [ 00:08:55.628 { 00:08:55.628 "name": "pt1", 00:08:55.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.628 "is_configured": true, 00:08:55.628 "data_offset": 2048, 00:08:55.628 "data_size": 63488 00:08:55.628 }, 00:08:55.628 { 00:08:55.628 "name": "pt2", 00:08:55.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.628 "is_configured": true, 00:08:55.628 "data_offset": 2048, 00:08:55.628 "data_size": 63488 00:08:55.628 }, 00:08:55.628 { 00:08:55.628 "name": "pt3", 00:08:55.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.628 "is_configured": true, 00:08:55.628 "data_offset": 2048, 00:08:55.628 "data_size": 63488 00:08:55.628 } 00:08:55.628 ] 00:08:55.628 } 00:08:55.628 } 00:08:55.628 }' 00:08:55.628 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.628 18:49:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:55.628 pt2 00:08:55.628 pt3' 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:55.889 [2024-11-28 18:49:25.437948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0aae7766-7002-4934-b61e-49bee3bcd66d 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0aae7766-7002-4934-b61e-49bee3bcd66d ']' 00:08:55.889 18:49:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.889 [2024-11-28 18:49:25.485678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.889 [2024-11-28 18:49:25.485704] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.889 [2024-11-28 18:49:25.485787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.889 [2024-11-28 18:49:25.485851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.889 [2024-11-28 18:49:25.485863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:55.889 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 18:49:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 [2024-11-28 18:49:25.621757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:56.150 [2024-11-28 18:49:25.623623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:56.150 [2024-11-28 18:49:25.623674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:56.150 [2024-11-28 18:49:25.623718] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:56.150 [2024-11-28 18:49:25.623774] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:56.150 [2024-11-28 18:49:25.623805] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:56.150 [2024-11-28 18:49:25.623818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.150 [2024-11-28 18:49:25.623827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:56.150 request: 00:08:56.150 { 00:08:56.150 "name": "raid_bdev1", 00:08:56.150 "raid_level": "raid0", 00:08:56.150 "base_bdevs": [ 00:08:56.150 "malloc1", 00:08:56.150 "malloc2", 00:08:56.150 "malloc3" 00:08:56.150 ], 00:08:56.150 "strip_size_kb": 64, 00:08:56.150 "superblock": false, 00:08:56.150 "method": "bdev_raid_create", 00:08:56.150 "req_id": 1 00:08:56.150 } 00:08:56.150 Got JSON-RPC error response 00:08:56.150 response: 00:08:56.150 { 00:08:56.150 "code": -17, 00:08:56.150 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:56.150 } 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 18:49:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.151 [2024-11-28 18:49:25.669718] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:56.151 [2024-11-28 18:49:25.669824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.151 [2024-11-28 18:49:25.669862] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:56.151 [2024-11-28 18:49:25.669889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.151 [2024-11-28 18:49:25.671935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.151 [2024-11-28 18:49:25.672004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:56.151 [2024-11-28 18:49:25.672091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:56.151 [2024-11-28 18:49:25.672159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:56.151 pt1 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:56.151 18:49:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.151 "name": "raid_bdev1", 00:08:56.151 "uuid": "0aae7766-7002-4934-b61e-49bee3bcd66d", 00:08:56.151 "strip_size_kb": 64, 00:08:56.151 "state": "configuring", 00:08:56.151 "raid_level": "raid0", 00:08:56.151 "superblock": true, 00:08:56.151 "num_base_bdevs": 3, 00:08:56.151 "num_base_bdevs_discovered": 1, 00:08:56.151 "num_base_bdevs_operational": 3, 00:08:56.151 "base_bdevs_list": [ 
00:08:56.151 { 00:08:56.151 "name": "pt1", 00:08:56.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.151 "is_configured": true, 00:08:56.151 "data_offset": 2048, 00:08:56.151 "data_size": 63488 00:08:56.151 }, 00:08:56.151 { 00:08:56.151 "name": null, 00:08:56.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.151 "is_configured": false, 00:08:56.151 "data_offset": 2048, 00:08:56.151 "data_size": 63488 00:08:56.151 }, 00:08:56.151 { 00:08:56.151 "name": null, 00:08:56.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.151 "is_configured": false, 00:08:56.151 "data_offset": 2048, 00:08:56.151 "data_size": 63488 00:08:56.151 } 00:08:56.151 ] 00:08:56.151 }' 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.151 18:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.721 [2024-11-28 18:49:26.141866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:56.721 [2024-11-28 18:49:26.141975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.721 [2024-11-28 18:49:26.142018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:56.721 [2024-11-28 18:49:26.142045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.721 [2024-11-28 18:49:26.142452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.721 [2024-11-28 
18:49:26.142505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:56.721 [2024-11-28 18:49:26.142595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:56.721 [2024-11-28 18:49:26.142651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:56.721 pt2 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.721 [2024-11-28 18:49:26.149907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.721 "name": "raid_bdev1", 00:08:56.721 "uuid": "0aae7766-7002-4934-b61e-49bee3bcd66d", 00:08:56.721 "strip_size_kb": 64, 00:08:56.721 "state": "configuring", 00:08:56.721 "raid_level": "raid0", 00:08:56.721 "superblock": true, 00:08:56.721 "num_base_bdevs": 3, 00:08:56.721 "num_base_bdevs_discovered": 1, 00:08:56.721 "num_base_bdevs_operational": 3, 00:08:56.721 "base_bdevs_list": [ 00:08:56.721 { 00:08:56.721 "name": "pt1", 00:08:56.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.721 "is_configured": true, 00:08:56.721 "data_offset": 2048, 00:08:56.721 "data_size": 63488 00:08:56.721 }, 00:08:56.721 { 00:08:56.721 "name": null, 00:08:56.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.721 "is_configured": false, 00:08:56.721 "data_offset": 0, 00:08:56.721 "data_size": 63488 00:08:56.721 }, 00:08:56.721 { 00:08:56.721 "name": null, 00:08:56.721 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.721 "is_configured": false, 00:08:56.721 "data_offset": 2048, 00:08:56.721 "data_size": 63488 00:08:56.721 } 00:08:56.721 ] 00:08:56.721 }' 00:08:56.721 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.721 18:49:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.981 [2024-11-28 18:49:26.577983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:56.981 [2024-11-28 18:49:26.578076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.981 [2024-11-28 18:49:26.578109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:56.981 [2024-11-28 18:49:26.578138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.981 [2024-11-28 18:49:26.578569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.981 [2024-11-28 18:49:26.578627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:56.981 [2024-11-28 18:49:26.578712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:56.981 [2024-11-28 18:49:26.578769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:56.981 pt2 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.981 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.242 [2024-11-28 18:49:26.585977] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:57.242 [2024-11-28 18:49:26.586069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.242 [2024-11-28 18:49:26.586114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:57.242 [2024-11-28 18:49:26.586154] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.242 [2024-11-28 18:49:26.586528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.242 [2024-11-28 18:49:26.586583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:57.242 [2024-11-28 18:49:26.586665] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:57.242 [2024-11-28 18:49:26.586712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:57.242 [2024-11-28 18:49:26.586823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:57.242 [2024-11-28 18:49:26.586863] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:57.242 [2024-11-28 18:49:26.587125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:57.242 [2024-11-28 18:49:26.587303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:57.242 [2024-11-28 18:49:26.587344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:57.242 [2024-11-28 18:49:26.587506] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.242 pt3 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.242 "name": "raid_bdev1", 00:08:57.242 "uuid": "0aae7766-7002-4934-b61e-49bee3bcd66d", 00:08:57.242 "strip_size_kb": 64, 00:08:57.242 "state": "online", 00:08:57.242 "raid_level": "raid0", 00:08:57.242 "superblock": true, 00:08:57.242 "num_base_bdevs": 3, 00:08:57.242 "num_base_bdevs_discovered": 3, 00:08:57.242 "num_base_bdevs_operational": 3, 00:08:57.242 "base_bdevs_list": [ 00:08:57.242 { 00:08:57.242 "name": "pt1", 00:08:57.242 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.242 "is_configured": true, 00:08:57.242 "data_offset": 2048, 00:08:57.242 "data_size": 63488 00:08:57.242 }, 00:08:57.242 { 00:08:57.242 "name": "pt2", 00:08:57.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.242 "is_configured": true, 00:08:57.242 "data_offset": 2048, 00:08:57.242 "data_size": 63488 00:08:57.242 }, 00:08:57.242 { 00:08:57.242 "name": "pt3", 00:08:57.242 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.242 "is_configured": true, 00:08:57.242 "data_offset": 2048, 00:08:57.242 "data_size": 63488 00:08:57.242 } 00:08:57.242 ] 00:08:57.242 }' 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.242 18:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.503 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:57.503 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:57.503 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.503 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.503 18:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.503 18:49:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.503 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.503 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.503 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.503 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.503 [2024-11-28 18:49:27.010384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.503 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.503 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.503 "name": "raid_bdev1", 00:08:57.503 "aliases": [ 00:08:57.503 "0aae7766-7002-4934-b61e-49bee3bcd66d" 00:08:57.503 ], 00:08:57.503 "product_name": "Raid Volume", 00:08:57.503 "block_size": 512, 00:08:57.503 "num_blocks": 190464, 00:08:57.503 "uuid": "0aae7766-7002-4934-b61e-49bee3bcd66d", 00:08:57.503 "assigned_rate_limits": { 00:08:57.503 "rw_ios_per_sec": 0, 00:08:57.503 "rw_mbytes_per_sec": 0, 00:08:57.503 "r_mbytes_per_sec": 0, 00:08:57.503 "w_mbytes_per_sec": 0 00:08:57.503 }, 00:08:57.503 "claimed": false, 00:08:57.503 "zoned": false, 00:08:57.503 "supported_io_types": { 00:08:57.503 "read": true, 00:08:57.503 "write": true, 00:08:57.503 "unmap": true, 00:08:57.503 "flush": true, 00:08:57.503 "reset": true, 00:08:57.503 "nvme_admin": false, 00:08:57.503 "nvme_io": false, 00:08:57.503 "nvme_io_md": false, 00:08:57.503 "write_zeroes": true, 00:08:57.503 "zcopy": false, 00:08:57.503 "get_zone_info": false, 00:08:57.503 "zone_management": false, 00:08:57.503 "zone_append": false, 00:08:57.503 "compare": false, 00:08:57.503 "compare_and_write": false, 00:08:57.503 "abort": false, 00:08:57.503 "seek_hole": false, 00:08:57.503 
"seek_data": false, 00:08:57.503 "copy": false, 00:08:57.503 "nvme_iov_md": false 00:08:57.503 }, 00:08:57.503 "memory_domains": [ 00:08:57.503 { 00:08:57.503 "dma_device_id": "system", 00:08:57.503 "dma_device_type": 1 00:08:57.503 }, 00:08:57.503 { 00:08:57.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.503 "dma_device_type": 2 00:08:57.503 }, 00:08:57.503 { 00:08:57.503 "dma_device_id": "system", 00:08:57.503 "dma_device_type": 1 00:08:57.503 }, 00:08:57.503 { 00:08:57.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.503 "dma_device_type": 2 00:08:57.503 }, 00:08:57.503 { 00:08:57.503 "dma_device_id": "system", 00:08:57.503 "dma_device_type": 1 00:08:57.503 }, 00:08:57.503 { 00:08:57.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.503 "dma_device_type": 2 00:08:57.503 } 00:08:57.503 ], 00:08:57.503 "driver_specific": { 00:08:57.503 "raid": { 00:08:57.503 "uuid": "0aae7766-7002-4934-b61e-49bee3bcd66d", 00:08:57.503 "strip_size_kb": 64, 00:08:57.503 "state": "online", 00:08:57.503 "raid_level": "raid0", 00:08:57.503 "superblock": true, 00:08:57.503 "num_base_bdevs": 3, 00:08:57.503 "num_base_bdevs_discovered": 3, 00:08:57.503 "num_base_bdevs_operational": 3, 00:08:57.503 "base_bdevs_list": [ 00:08:57.503 { 00:08:57.503 "name": "pt1", 00:08:57.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.503 "is_configured": true, 00:08:57.503 "data_offset": 2048, 00:08:57.503 "data_size": 63488 00:08:57.503 }, 00:08:57.503 { 00:08:57.503 "name": "pt2", 00:08:57.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.503 "is_configured": true, 00:08:57.503 "data_offset": 2048, 00:08:57.503 "data_size": 63488 00:08:57.503 }, 00:08:57.503 { 00:08:57.503 "name": "pt3", 00:08:57.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.503 "is_configured": true, 00:08:57.503 "data_offset": 2048, 00:08:57.503 "data_size": 63488 00:08:57.503 } 00:08:57.503 ] 00:08:57.503 } 00:08:57.503 } 00:08:57.503 }' 00:08:57.503 18:49:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.503 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:57.503 pt2 00:08:57.503 pt3' 00:08:57.503 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.764 [2024-11-28 18:49:27.274447] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
0aae7766-7002-4934-b61e-49bee3bcd66d '!=' 0aae7766-7002-4934-b61e-49bee3bcd66d ']' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77782 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 77782 ']' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 77782 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77782 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77782' 00:08:57.764 killing process with pid 77782 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 77782 00:08:57.764 [2024-11-28 18:49:27.340849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.764 [2024-11-28 18:49:27.340980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.764 [2024-11-28 18:49:27.341061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.764 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 77782 00:08:57.764
[2024-11-28 18:49:27.341129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:58.024 [2024-11-28 18:49:27.373381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.025 18:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:58.025 00:08:58.025 real 0m3.898s 00:08:58.025 user 0m6.208s 00:08:58.025 sys 0m0.822s 00:08:58.025 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.025 ************************************ 00:08:58.025 END TEST raid_superblock_test 00:08:58.025 ************************************ 00:08:58.025 18:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.285 18:49:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:58.285 18:49:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:58.285 18:49:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.285 18:49:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.285 ************************************ 00:08:58.285 START TEST raid_read_error_test 00:08:58.285 ************************************ 00:08:58.285 18:49:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:58.285 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:58.285 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:58.285 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.286 18:49:27
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.suftdLctDW 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78020 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78020 00:08:58.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 78020 ']' 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.286 18:49:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.286 [2024-11-28 18:49:27.768666] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:58.286 [2024-11-28 18:49:27.768875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78020 ] 00:08:58.546 [2024-11-28 18:49:27.903593] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:58.546 [2024-11-28 18:49:27.942151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.546 [2024-11-28 18:49:27.967887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.546 [2024-11-28 18:49:28.010392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.546 [2024-11-28 18:49:28.010439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 BaseBdev1_malloc 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 true 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 [2024-11-28 18:49:28.618507] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:59.118 [2024-11-28 18:49:28.618558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.118 [2024-11-28 18:49:28.618594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:59.118 [2024-11-28 18:49:28.618608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.118 [2024-11-28 18:49:28.620702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.118 [2024-11-28 18:49:28.620779] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:59.118 BaseBdev1 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 BaseBdev2_malloc 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 true 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 [2024-11-28 18:49:28.658923] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:59.118 [2024-11-28 18:49:28.659027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.118 [2024-11-28 18:49:28.659047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:59.118 [2024-11-28 18:49:28.659057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.118 [2024-11-28 18:49:28.661060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.118 [2024-11-28 18:49:28.661108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:59.118 BaseBdev2 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 BaseBdev3_malloc 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:59.118 
18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 true 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 [2024-11-28 18:49:28.699294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:59.118 [2024-11-28 18:49:28.699341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.118 [2024-11-28 18:49:28.699373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:59.118 [2024-11-28 18:49:28.699383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.118 [2024-11-28 18:49:28.701354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.118 [2024-11-28 18:49:28.701390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:59.118 BaseBdev3 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.118 [2024-11-28 18:49:28.711354] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.118 [2024-11-28 18:49:28.713122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.118 [2024-11-28 18:49:28.713272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.118 [2024-11-28 18:49:28.713476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.118 [2024-11-28 18:49:28.713490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.118 [2024-11-28 18:49:28.713724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:08:59.118 [2024-11-28 18:49:28.713856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.118 [2024-11-28 18:49:28.713868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:59.118 [2024-11-28 18:49:28.713982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.118 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.379 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.379 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.379 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.379 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.379 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.379 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.379 "name": "raid_bdev1", 00:08:59.379 "uuid": "b9c6562d-1e65-4c4b-8f1c-5d93a8ebe60e", 00:08:59.379 "strip_size_kb": 64, 00:08:59.379 "state": "online", 00:08:59.379 "raid_level": "raid0", 00:08:59.379 "superblock": true, 00:08:59.379 "num_base_bdevs": 3, 00:08:59.379 "num_base_bdevs_discovered": 3, 00:08:59.379 "num_base_bdevs_operational": 3, 00:08:59.379 "base_bdevs_list": [ 00:08:59.379 { 00:08:59.379 "name": "BaseBdev1", 00:08:59.379 "uuid": "2846840d-6698-5285-a77b-bc3e759548ee", 00:08:59.379 "is_configured": true, 00:08:59.379 "data_offset": 2048, 00:08:59.379 "data_size": 63488 00:08:59.379 }, 00:08:59.379 { 00:08:59.379 "name": "BaseBdev2", 00:08:59.379 "uuid": "998a92bf-ea25-5acb-8639-d9e095db9445", 00:08:59.379 "is_configured": true, 00:08:59.379 "data_offset": 2048, 00:08:59.379 "data_size": 63488 00:08:59.379 }, 00:08:59.379 { 00:08:59.379 "name": "BaseBdev3", 00:08:59.379 "uuid": "e584c365-2fc0-5154-8f6f-08d066751983", 00:08:59.379 "is_configured": true, 00:08:59.379 "data_offset": 
2048, 00:08:59.379 "data_size": 63488 00:08:59.379 } 00:08:59.379 ] 00:08:59.379 }' 00:08:59.379 18:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.379 18:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.639 18:49:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:59.639 18:49:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:59.639 [2024-11-28 18:49:29.195840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.580 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.580 "name": "raid_bdev1", 00:09:00.580 "uuid": "b9c6562d-1e65-4c4b-8f1c-5d93a8ebe60e", 00:09:00.580 "strip_size_kb": 64, 00:09:00.580 "state": "online", 00:09:00.580 "raid_level": "raid0", 00:09:00.580 "superblock": true, 00:09:00.580 "num_base_bdevs": 3, 00:09:00.580 "num_base_bdevs_discovered": 3, 00:09:00.580 "num_base_bdevs_operational": 3, 00:09:00.580 "base_bdevs_list": [ 00:09:00.580 { 00:09:00.580 "name": "BaseBdev1", 00:09:00.580 "uuid": "2846840d-6698-5285-a77b-bc3e759548ee", 00:09:00.580 "is_configured": true, 00:09:00.580 "data_offset": 2048, 00:09:00.580 "data_size": 63488 00:09:00.580 }, 00:09:00.580 { 00:09:00.580 "name": "BaseBdev2", 00:09:00.580 "uuid": "998a92bf-ea25-5acb-8639-d9e095db9445", 00:09:00.580 "is_configured": true, 00:09:00.580 "data_offset": 2048, 
00:09:00.580 "data_size": 63488 00:09:00.580 }, 00:09:00.580 { 00:09:00.580 "name": "BaseBdev3", 00:09:00.580 "uuid": "e584c365-2fc0-5154-8f6f-08d066751983", 00:09:00.580 "is_configured": true, 00:09:00.581 "data_offset": 2048, 00:09:00.581 "data_size": 63488 00:09:00.581 } 00:09:00.581 ] 00:09:00.581 }' 00:09:00.581 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.581 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.152 [2024-11-28 18:49:30.570146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.152 [2024-11-28 18:49:30.570233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.152 [2024-11-28 18:49:30.572793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.152 [2024-11-28 18:49:30.572888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.152 [2024-11-28 18:49:30.572944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.152 [2024-11-28 18:49:30.572984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:01.152 { 00:09:01.152 "results": [ 00:09:01.152 { 00:09:01.152 "job": "raid_bdev1", 00:09:01.152 "core_mask": "0x1", 00:09:01.152 "workload": "randrw", 00:09:01.152 "percentage": 50, 00:09:01.152 "status": "finished", 00:09:01.152 "queue_depth": 1, 00:09:01.152 "io_size": 131072, 00:09:01.152 "runtime": 1.372615, 00:09:01.152 "iops": 17045.566309562404, 00:09:01.152 "mibps": 
2130.6957886953005, 00:09:01.152 "io_failed": 1, 00:09:01.152 "io_timeout": 0, 00:09:01.152 "avg_latency_us": 81.03319509608465, 00:09:01.152 "min_latency_us": 24.76771550597054, 00:09:01.152 "max_latency_us": 1349.5057962172057 00:09:01.152 } 00:09:01.152 ], 00:09:01.152 "core_count": 1 00:09:01.152 } 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78020 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 78020 ']' 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 78020 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78020 00:09:01.152 killing process with pid 78020 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78020' 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 78020 00:09:01.152 [2024-11-28 18:49:30.618668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.152 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 78020 00:09:01.152 [2024-11-28 18:49:30.643055] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.suftdLctDW 
00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:01.413 ************************************ 00:09:01.413 END TEST raid_read_error_test 00:09:01.413 ************************************ 00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:01.413 00:09:01.413 real 0m3.194s 00:09:01.413 user 0m4.007s 00:09:01.413 sys 0m0.532s 00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.413 18:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.413 18:49:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:01.413 18:49:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:01.413 18:49:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.413 18:49:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.413 ************************************ 00:09:01.413 START TEST raid_write_error_test 00:09:01.413 ************************************ 00:09:01.413 18:49:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:01.413 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:01.413 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:09:01.413 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:01.413 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:01.413 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.413 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:01.413 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:01.414 
18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cDIvK1499Y 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78149 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78149 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78149 ']' 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.414 18:49:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.674 [2024-11-28 18:49:31.035717] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:01.674 [2024-11-28 18:49:31.035903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78149 ] 00:09:01.674 [2024-11-28 18:49:31.170314] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:01.674 [2024-11-28 18:49:31.207887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.674 [2024-11-28 18:49:31.232848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.674 [2024-11-28 18:49:31.275018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.674 [2024-11-28 18:49:31.275169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.614 BaseBdev1_malloc 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.614 18:49:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.614 true 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.614 [2024-11-28 18:49:31.875313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.614 [2024-11-28 18:49:31.875376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.614 [2024-11-28 18:49:31.875393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.614 [2024-11-28 18:49:31.875404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.614 [2024-11-28 18:49:31.877411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.614 [2024-11-28 18:49:31.877454] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.614 BaseBdev1 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.614 BaseBdev2_malloc 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.614 true 00:09:02.614 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.615 [2024-11-28 18:49:31.915801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:02.615 [2024-11-28 18:49:31.915848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.615 [2024-11-28 18:49:31.915865] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:02.615 [2024-11-28 18:49:31.915874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.615 [2024-11-28 18:49:31.917879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.615 [2024-11-28 18:49:31.917917] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:02.615 BaseBdev2 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:02.615 18:49:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.615 BaseBdev3_malloc 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.615 true 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.615 [2024-11-28 18:49:31.956177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:02.615 [2024-11-28 18:49:31.956225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.615 [2024-11-28 18:49:31.956240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:02.615 [2024-11-28 18:49:31.956250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.615 [2024-11-28 18:49:31.958247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.615 [2024-11-28 18:49:31.958283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:02.615 BaseBdev3 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.615 [2024-11-28 18:49:31.968229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.615 [2024-11-28 18:49:31.969984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.615 [2024-11-28 18:49:31.970061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.615 [2024-11-28 18:49:31.970232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:02.615 [2024-11-28 18:49:31.970243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.615 [2024-11-28 18:49:31.970507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:02.615 [2024-11-28 18:49:31.970670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.615 [2024-11-28 18:49:31.970682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:02.615 [2024-11-28 18:49:31.970810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.615 18:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.615 18:49:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.615 "name": "raid_bdev1", 00:09:02.615 "uuid": "f9d440be-3834-4960-9b56-72e550d295e2", 00:09:02.615 "strip_size_kb": 64, 00:09:02.615 "state": "online", 00:09:02.615 "raid_level": "raid0", 00:09:02.615 "superblock": true, 00:09:02.615 "num_base_bdevs": 3, 00:09:02.615 "num_base_bdevs_discovered": 3, 00:09:02.615 "num_base_bdevs_operational": 3, 00:09:02.615 "base_bdevs_list": [ 00:09:02.615 { 00:09:02.615 "name": "BaseBdev1", 00:09:02.615 "uuid": "cba37f7e-65b4-5b1e-8d8d-fb3c17bfb725", 00:09:02.615 "is_configured": true, 00:09:02.615 "data_offset": 2048, 
00:09:02.615 "data_size": 63488 00:09:02.615 }, 00:09:02.615 { 00:09:02.615 "name": "BaseBdev2", 00:09:02.615 "uuid": "63288951-3fa5-5870-9cbb-a7c1ed111c40", 00:09:02.615 "is_configured": true, 00:09:02.615 "data_offset": 2048, 00:09:02.615 "data_size": 63488 00:09:02.615 }, 00:09:02.615 { 00:09:02.615 "name": "BaseBdev3", 00:09:02.615 "uuid": "d93dd0fe-0a17-50b6-ae85-5c3a9128d526", 00:09:02.615 "is_configured": true, 00:09:02.615 "data_offset": 2048, 00:09:02.615 "data_size": 63488 00:09:02.615 } 00:09:02.615 ] 00:09:02.615 }' 00:09:02.615 18:49:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.615 18:49:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.875 18:49:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:02.875 18:49:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:03.135 [2024-11-28 18:49:32.512773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.093 "name": "raid_bdev1", 00:09:04.093 "uuid": "f9d440be-3834-4960-9b56-72e550d295e2", 00:09:04.093 "strip_size_kb": 64, 00:09:04.093 "state": "online", 00:09:04.093 "raid_level": "raid0", 00:09:04.093 "superblock": true, 00:09:04.093 "num_base_bdevs": 3, 00:09:04.093 "num_base_bdevs_discovered": 3, 
00:09:04.093 "num_base_bdevs_operational": 3, 00:09:04.093 "base_bdevs_list": [ 00:09:04.093 { 00:09:04.093 "name": "BaseBdev1", 00:09:04.093 "uuid": "cba37f7e-65b4-5b1e-8d8d-fb3c17bfb725", 00:09:04.093 "is_configured": true, 00:09:04.093 "data_offset": 2048, 00:09:04.093 "data_size": 63488 00:09:04.093 }, 00:09:04.093 { 00:09:04.093 "name": "BaseBdev2", 00:09:04.093 "uuid": "63288951-3fa5-5870-9cbb-a7c1ed111c40", 00:09:04.093 "is_configured": true, 00:09:04.093 "data_offset": 2048, 00:09:04.093 "data_size": 63488 00:09:04.093 }, 00:09:04.093 { 00:09:04.093 "name": "BaseBdev3", 00:09:04.093 "uuid": "d93dd0fe-0a17-50b6-ae85-5c3a9128d526", 00:09:04.093 "is_configured": true, 00:09:04.093 "data_offset": 2048, 00:09:04.093 "data_size": 63488 00:09:04.093 } 00:09:04.093 ] 00:09:04.093 }' 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.093 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.354 [2024-11-28 18:49:33.850876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.354 [2024-11-28 18:49:33.850916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.354 [2024-11-28 18:49:33.853424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.354 [2024-11-28 18:49:33.853489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.354 [2024-11-28 18:49:33.853527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.354 [2024-11-28 18:49:33.853536] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:04.354 { 00:09:04.354 "results": [ 00:09:04.354 { 00:09:04.354 "job": "raid_bdev1", 00:09:04.354 "core_mask": "0x1", 00:09:04.354 "workload": "randrw", 00:09:04.354 "percentage": 50, 00:09:04.354 "status": "finished", 00:09:04.354 "queue_depth": 1, 00:09:04.354 "io_size": 131072, 00:09:04.354 "runtime": 1.336322, 00:09:04.354 "iops": 16899.37006200601, 00:09:04.354 "mibps": 2112.4212577507515, 00:09:04.354 "io_failed": 1, 00:09:04.354 "io_timeout": 0, 00:09:04.354 "avg_latency_us": 81.65833325074172, 00:09:04.354 "min_latency_us": 17.962172056131788, 00:09:04.354 "max_latency_us": 1356.646038525233 00:09:04.354 } 00:09:04.354 ], 00:09:04.354 "core_count": 1 00:09:04.354 } 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78149 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78149 ']' 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78149 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78149 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78149' 00:09:04.354 killing process with pid 78149 00:09:04.354 18:49:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78149 00:09:04.354 [2024-11-28 18:49:33.899150] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.354 18:49:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78149 00:09:04.354 [2024-11-28 18:49:33.923606] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cDIvK1499Y 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:04.614 00:09:04.614 real 0m3.214s 00:09:04.614 user 0m4.074s 00:09:04.614 sys 0m0.520s 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.614 18:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.614 ************************************ 00:09:04.614 END TEST raid_write_error_test 00:09:04.614 ************************************ 00:09:04.614 18:49:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:04.614 18:49:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:09:04.614 18:49:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:04.614 18:49:34 
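The `fail_per_s=0.75` above is extracted by grepping the bdevperf summary file (`/raidtest/tmp.cDIvK1499Y`) and taking column 6 with `awk`; the test then only requires it to differ from `0.00`, since raid0 has no redundancy and one injected write error must surface as a failed I/O. As a rough cross-check (an assumption about how the summary figure relates to the results JSON, not a documented bdevperf formula): the one failed I/O over the 1.336322 s runtime reported in the `"results"` block above rounds to the same value:

```python
# Values taken from the "results" JSON logged above.
io_failed = 1
runtime_s = 1.336322

# Assumed relationship: failures per second, rounded to two decimals
# like the summary line that the test greps and awks apart.
fail_per_s = round(io_failed / runtime_s, 2)
print(fail_per_s)  # 0.75, matching the fail_per_s the test extracted
```

Because raid0 fails `has_redundancy` (the `case` returning 1 above), the test asserts a nonzero failure rate rather than zero.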
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.614 18:49:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.875 ************************************ 00:09:04.875 START TEST raid_state_function_test 00:09:04.875 ************************************ 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78282 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:04.875 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78282' 00:09:04.875 Process raid pid: 78282 00:09:04.876 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78282 00:09:04.876 18:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78282 ']' 00:09:04.876 18:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:04.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.876 18:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.876 18:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.876 18:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.876 18:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.876 [2024-11-28 18:49:34.315378] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:04.876 [2024-11-28 18:49:34.315513] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.876 [2024-11-28 18:49:34.454275] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:05.135 [2024-11-28 18:49:34.490620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.135 [2024-11-28 18:49:34.515686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.135 [2024-11-28 18:49:34.557660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.135 [2024-11-28 18:49:34.557700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.704 [2024-11-28 18:49:35.141107] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.704 [2024-11-28 18:49:35.141162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.704 [2024-11-28 18:49:35.141175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.704 [2024-11-28 18:49:35.141182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.704 [2024-11-28 18:49:35.141193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.704 [2024-11-28 18:49:35.141200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.704 18:49:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.704 "name": "Existed_Raid", 00:09:05.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.704 "strip_size_kb": 64, 00:09:05.704 "state": "configuring", 00:09:05.704 
"raid_level": "concat", 00:09:05.704 "superblock": false, 00:09:05.704 "num_base_bdevs": 3, 00:09:05.704 "num_base_bdevs_discovered": 0, 00:09:05.704 "num_base_bdevs_operational": 3, 00:09:05.704 "base_bdevs_list": [ 00:09:05.704 { 00:09:05.704 "name": "BaseBdev1", 00:09:05.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.704 "is_configured": false, 00:09:05.704 "data_offset": 0, 00:09:05.704 "data_size": 0 00:09:05.704 }, 00:09:05.704 { 00:09:05.704 "name": "BaseBdev2", 00:09:05.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.704 "is_configured": false, 00:09:05.704 "data_offset": 0, 00:09:05.704 "data_size": 0 00:09:05.704 }, 00:09:05.704 { 00:09:05.704 "name": "BaseBdev3", 00:09:05.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.704 "is_configured": false, 00:09:05.704 "data_offset": 0, 00:09:05.704 "data_size": 0 00:09:05.704 } 00:09:05.704 ] 00:09:05.704 }' 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.704 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.964 [2024-11-28 18:49:35.529099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.964 [2024-11-28 18:49:35.529174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 
BaseBdev3'\''' -n Existed_Raid 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.964 [2024-11-28 18:49:35.541143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.964 [2024-11-28 18:49:35.541183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.964 [2024-11-28 18:49:35.541193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.964 [2024-11-28 18:49:35.541200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.964 [2024-11-28 18:49:35.541208] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.964 [2024-11-28 18:49:35.541216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.964 [2024-11-28 18:49:35.561801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.964 BaseBdev1 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:05.964 18:49:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.964 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.224 [ 00:09:06.224 { 00:09:06.224 "name": "BaseBdev1", 00:09:06.224 "aliases": [ 00:09:06.224 "ec1b4306-b33d-48cc-8402-46ba7dc9875a" 00:09:06.224 ], 00:09:06.224 "product_name": "Malloc disk", 00:09:06.224 "block_size": 512, 00:09:06.224 "num_blocks": 65536, 00:09:06.224 "uuid": "ec1b4306-b33d-48cc-8402-46ba7dc9875a", 00:09:06.224 "assigned_rate_limits": { 00:09:06.224 "rw_ios_per_sec": 0, 00:09:06.224 "rw_mbytes_per_sec": 0, 00:09:06.224 "r_mbytes_per_sec": 0, 00:09:06.224 "w_mbytes_per_sec": 0 00:09:06.224 }, 00:09:06.224 "claimed": true, 00:09:06.224 "claim_type": "exclusive_write", 00:09:06.224 "zoned": false, 00:09:06.224 "supported_io_types": { 00:09:06.224 "read": true, 00:09:06.224 "write": true, 00:09:06.224 "unmap": true, 00:09:06.224 "flush": true, 
00:09:06.224 "reset": true, 00:09:06.224 "nvme_admin": false, 00:09:06.224 "nvme_io": false, 00:09:06.224 "nvme_io_md": false, 00:09:06.224 "write_zeroes": true, 00:09:06.224 "zcopy": true, 00:09:06.224 "get_zone_info": false, 00:09:06.224 "zone_management": false, 00:09:06.224 "zone_append": false, 00:09:06.224 "compare": false, 00:09:06.224 "compare_and_write": false, 00:09:06.224 "abort": true, 00:09:06.224 "seek_hole": false, 00:09:06.224 "seek_data": false, 00:09:06.224 "copy": true, 00:09:06.224 "nvme_iov_md": false 00:09:06.224 }, 00:09:06.224 "memory_domains": [ 00:09:06.224 { 00:09:06.224 "dma_device_id": "system", 00:09:06.224 "dma_device_type": 1 00:09:06.224 }, 00:09:06.224 { 00:09:06.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.224 "dma_device_type": 2 00:09:06.224 } 00:09:06.224 ], 00:09:06.224 "driver_specific": {} 00:09:06.224 } 00:09:06.224 ] 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.224 "name": "Existed_Raid", 00:09:06.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.224 "strip_size_kb": 64, 00:09:06.224 "state": "configuring", 00:09:06.224 "raid_level": "concat", 00:09:06.224 "superblock": false, 00:09:06.224 "num_base_bdevs": 3, 00:09:06.224 "num_base_bdevs_discovered": 1, 00:09:06.224 "num_base_bdevs_operational": 3, 00:09:06.224 "base_bdevs_list": [ 00:09:06.224 { 00:09:06.224 "name": "BaseBdev1", 00:09:06.224 "uuid": "ec1b4306-b33d-48cc-8402-46ba7dc9875a", 00:09:06.224 "is_configured": true, 00:09:06.224 "data_offset": 0, 00:09:06.224 "data_size": 65536 00:09:06.224 }, 00:09:06.224 { 00:09:06.224 "name": "BaseBdev2", 00:09:06.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.224 "is_configured": false, 00:09:06.224 "data_offset": 0, 00:09:06.224 "data_size": 0 00:09:06.224 }, 00:09:06.224 { 00:09:06.224 "name": "BaseBdev3", 00:09:06.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.224 "is_configured": false, 00:09:06.224 "data_offset": 0, 00:09:06.224 "data_size": 0 
00:09:06.224 } 00:09:06.224 ] 00:09:06.224 }' 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.224 18:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.485 [2024-11-28 18:49:36.009949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.485 [2024-11-28 18:49:36.010043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.485 [2024-11-28 18:49:36.021976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.485 [2024-11-28 18:49:36.023781] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.485 [2024-11-28 18:49:36.023820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.485 [2024-11-28 18:49:36.023833] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.485 [2024-11-28 18:49:36.023840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 
doesn't exist now 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:06.485 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.486 18:49:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.486 "name": "Existed_Raid", 00:09:06.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.486 "strip_size_kb": 64, 00:09:06.486 "state": "configuring", 00:09:06.486 "raid_level": "concat", 00:09:06.486 "superblock": false, 00:09:06.486 "num_base_bdevs": 3, 00:09:06.486 "num_base_bdevs_discovered": 1, 00:09:06.486 "num_base_bdevs_operational": 3, 00:09:06.486 "base_bdevs_list": [ 00:09:06.486 { 00:09:06.486 "name": "BaseBdev1", 00:09:06.486 "uuid": "ec1b4306-b33d-48cc-8402-46ba7dc9875a", 00:09:06.486 "is_configured": true, 00:09:06.486 "data_offset": 0, 00:09:06.486 "data_size": 65536 00:09:06.486 }, 00:09:06.486 { 00:09:06.486 "name": "BaseBdev2", 00:09:06.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.486 "is_configured": false, 00:09:06.486 "data_offset": 0, 00:09:06.486 "data_size": 0 00:09:06.486 }, 00:09:06.486 { 00:09:06.486 "name": "BaseBdev3", 00:09:06.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.486 "is_configured": false, 00:09:06.486 "data_offset": 0, 00:09:06.486 "data_size": 0 00:09:06.486 } 00:09:06.486 ] 00:09:06.486 }' 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.486 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.056 [2024-11-28 18:49:36.409027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.056 BaseBdev2 00:09:07.056 
18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.056 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.057 [ 00:09:07.057 { 00:09:07.057 "name": "BaseBdev2", 00:09:07.057 "aliases": [ 00:09:07.057 "47011c77-5028-4688-894a-61fe9b458725" 00:09:07.057 ], 00:09:07.057 "product_name": "Malloc disk", 00:09:07.057 "block_size": 512, 00:09:07.057 "num_blocks": 65536, 00:09:07.057 "uuid": "47011c77-5028-4688-894a-61fe9b458725", 00:09:07.057 "assigned_rate_limits": { 00:09:07.057 "rw_ios_per_sec": 0, 00:09:07.057 "rw_mbytes_per_sec": 0, 
00:09:07.057 "r_mbytes_per_sec": 0, 00:09:07.057 "w_mbytes_per_sec": 0 00:09:07.057 }, 00:09:07.057 "claimed": true, 00:09:07.057 "claim_type": "exclusive_write", 00:09:07.057 "zoned": false, 00:09:07.057 "supported_io_types": { 00:09:07.057 "read": true, 00:09:07.057 "write": true, 00:09:07.057 "unmap": true, 00:09:07.057 "flush": true, 00:09:07.057 "reset": true, 00:09:07.057 "nvme_admin": false, 00:09:07.057 "nvme_io": false, 00:09:07.057 "nvme_io_md": false, 00:09:07.057 "write_zeroes": true, 00:09:07.057 "zcopy": true, 00:09:07.057 "get_zone_info": false, 00:09:07.057 "zone_management": false, 00:09:07.057 "zone_append": false, 00:09:07.057 "compare": false, 00:09:07.057 "compare_and_write": false, 00:09:07.057 "abort": true, 00:09:07.057 "seek_hole": false, 00:09:07.057 "seek_data": false, 00:09:07.057 "copy": true, 00:09:07.057 "nvme_iov_md": false 00:09:07.057 }, 00:09:07.057 "memory_domains": [ 00:09:07.057 { 00:09:07.057 "dma_device_id": "system", 00:09:07.057 "dma_device_type": 1 00:09:07.057 }, 00:09:07.057 { 00:09:07.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.057 "dma_device_type": 2 00:09:07.057 } 00:09:07.057 ], 00:09:07.057 "driver_specific": {} 00:09:07.057 } 00:09:07.057 ] 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.057 "name": "Existed_Raid", 00:09:07.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.057 "strip_size_kb": 64, 00:09:07.057 "state": "configuring", 00:09:07.057 "raid_level": "concat", 00:09:07.057 "superblock": false, 00:09:07.057 "num_base_bdevs": 3, 00:09:07.057 "num_base_bdevs_discovered": 2, 00:09:07.057 "num_base_bdevs_operational": 3, 00:09:07.057 "base_bdevs_list": [ 00:09:07.057 { 00:09:07.057 "name": "BaseBdev1", 00:09:07.057 "uuid": "ec1b4306-b33d-48cc-8402-46ba7dc9875a", 
00:09:07.057 "is_configured": true, 00:09:07.057 "data_offset": 0, 00:09:07.057 "data_size": 65536 00:09:07.057 }, 00:09:07.057 { 00:09:07.057 "name": "BaseBdev2", 00:09:07.057 "uuid": "47011c77-5028-4688-894a-61fe9b458725", 00:09:07.057 "is_configured": true, 00:09:07.057 "data_offset": 0, 00:09:07.057 "data_size": 65536 00:09:07.057 }, 00:09:07.057 { 00:09:07.057 "name": "BaseBdev3", 00:09:07.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.057 "is_configured": false, 00:09:07.057 "data_offset": 0, 00:09:07.057 "data_size": 0 00:09:07.057 } 00:09:07.057 ] 00:09:07.057 }' 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.057 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.318 [2024-11-28 18:49:36.910397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.318 [2024-11-28 18:49:36.910529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:07.318 [2024-11-28 18:49:36.910574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:07.318 [2024-11-28 18:49:36.910947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:07.318 [2024-11-28 18:49:36.911162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:07.318 [2024-11-28 18:49:36.911217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:07.318 [2024-11-28 18:49:36.911494] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.318 BaseBdev3 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.318 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.579 [ 00:09:07.579 { 00:09:07.579 "name": "BaseBdev3", 00:09:07.579 "aliases": [ 00:09:07.579 "2ab19b92-2eef-454c-ba0f-b116787a5195" 00:09:07.579 ], 00:09:07.579 "product_name": "Malloc disk", 00:09:07.579 "block_size": 512, 00:09:07.579 "num_blocks": 65536, 00:09:07.579 "uuid": "2ab19b92-2eef-454c-ba0f-b116787a5195", 00:09:07.579 
"assigned_rate_limits": { 00:09:07.579 "rw_ios_per_sec": 0, 00:09:07.579 "rw_mbytes_per_sec": 0, 00:09:07.579 "r_mbytes_per_sec": 0, 00:09:07.579 "w_mbytes_per_sec": 0 00:09:07.579 }, 00:09:07.579 "claimed": true, 00:09:07.579 "claim_type": "exclusive_write", 00:09:07.579 "zoned": false, 00:09:07.579 "supported_io_types": { 00:09:07.579 "read": true, 00:09:07.579 "write": true, 00:09:07.579 "unmap": true, 00:09:07.579 "flush": true, 00:09:07.579 "reset": true, 00:09:07.579 "nvme_admin": false, 00:09:07.579 "nvme_io": false, 00:09:07.579 "nvme_io_md": false, 00:09:07.579 "write_zeroes": true, 00:09:07.579 "zcopy": true, 00:09:07.579 "get_zone_info": false, 00:09:07.579 "zone_management": false, 00:09:07.579 "zone_append": false, 00:09:07.579 "compare": false, 00:09:07.579 "compare_and_write": false, 00:09:07.579 "abort": true, 00:09:07.579 "seek_hole": false, 00:09:07.579 "seek_data": false, 00:09:07.579 "copy": true, 00:09:07.579 "nvme_iov_md": false 00:09:07.579 }, 00:09:07.579 "memory_domains": [ 00:09:07.579 { 00:09:07.579 "dma_device_id": "system", 00:09:07.579 "dma_device_type": 1 00:09:07.579 }, 00:09:07.579 { 00:09:07.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.579 "dma_device_type": 2 00:09:07.579 } 00:09:07.579 ], 00:09:07.579 "driver_specific": {} 00:09:07.579 } 00:09:07.579 ] 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.579 "name": "Existed_Raid", 00:09:07.579 "uuid": "83ea0723-405c-43b2-b774-b34959579190", 00:09:07.579 "strip_size_kb": 64, 00:09:07.579 "state": "online", 00:09:07.579 "raid_level": "concat", 00:09:07.579 "superblock": false, 00:09:07.579 "num_base_bdevs": 3, 00:09:07.579 "num_base_bdevs_discovered": 3, 00:09:07.579 "num_base_bdevs_operational": 3, 00:09:07.579 "base_bdevs_list": [ 00:09:07.579 { 
00:09:07.579 "name": "BaseBdev1", 00:09:07.579 "uuid": "ec1b4306-b33d-48cc-8402-46ba7dc9875a", 00:09:07.579 "is_configured": true, 00:09:07.579 "data_offset": 0, 00:09:07.579 "data_size": 65536 00:09:07.579 }, 00:09:07.579 { 00:09:07.579 "name": "BaseBdev2", 00:09:07.579 "uuid": "47011c77-5028-4688-894a-61fe9b458725", 00:09:07.579 "is_configured": true, 00:09:07.579 "data_offset": 0, 00:09:07.579 "data_size": 65536 00:09:07.579 }, 00:09:07.579 { 00:09:07.579 "name": "BaseBdev3", 00:09:07.579 "uuid": "2ab19b92-2eef-454c-ba0f-b116787a5195", 00:09:07.579 "is_configured": true, 00:09:07.579 "data_offset": 0, 00:09:07.579 "data_size": 65536 00:09:07.579 } 00:09:07.579 ] 00:09:07.579 }' 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.579 18:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:09:07.840 [2024-11-28 18:49:37.382838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.840 "name": "Existed_Raid", 00:09:07.840 "aliases": [ 00:09:07.840 "83ea0723-405c-43b2-b774-b34959579190" 00:09:07.840 ], 00:09:07.840 "product_name": "Raid Volume", 00:09:07.840 "block_size": 512, 00:09:07.840 "num_blocks": 196608, 00:09:07.840 "uuid": "83ea0723-405c-43b2-b774-b34959579190", 00:09:07.840 "assigned_rate_limits": { 00:09:07.840 "rw_ios_per_sec": 0, 00:09:07.840 "rw_mbytes_per_sec": 0, 00:09:07.840 "r_mbytes_per_sec": 0, 00:09:07.840 "w_mbytes_per_sec": 0 00:09:07.840 }, 00:09:07.840 "claimed": false, 00:09:07.840 "zoned": false, 00:09:07.840 "supported_io_types": { 00:09:07.840 "read": true, 00:09:07.840 "write": true, 00:09:07.840 "unmap": true, 00:09:07.840 "flush": true, 00:09:07.840 "reset": true, 00:09:07.840 "nvme_admin": false, 00:09:07.840 "nvme_io": false, 00:09:07.840 "nvme_io_md": false, 00:09:07.840 "write_zeroes": true, 00:09:07.840 "zcopy": false, 00:09:07.840 "get_zone_info": false, 00:09:07.840 "zone_management": false, 00:09:07.840 "zone_append": false, 00:09:07.840 "compare": false, 00:09:07.840 "compare_and_write": false, 00:09:07.840 "abort": false, 00:09:07.840 "seek_hole": false, 00:09:07.840 "seek_data": false, 00:09:07.840 "copy": false, 00:09:07.840 "nvme_iov_md": false 00:09:07.840 }, 00:09:07.840 "memory_domains": [ 00:09:07.840 { 00:09:07.840 "dma_device_id": "system", 00:09:07.840 "dma_device_type": 1 00:09:07.840 }, 00:09:07.840 { 00:09:07.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.840 "dma_device_type": 2 00:09:07.840 }, 00:09:07.840 { 00:09:07.840 "dma_device_id": "system", 00:09:07.840 "dma_device_type": 1 00:09:07.840 }, 00:09:07.840 { 00:09:07.840 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.840 "dma_device_type": 2 00:09:07.840 }, 00:09:07.840 { 00:09:07.840 "dma_device_id": "system", 00:09:07.840 "dma_device_type": 1 00:09:07.840 }, 00:09:07.840 { 00:09:07.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.840 "dma_device_type": 2 00:09:07.840 } 00:09:07.840 ], 00:09:07.840 "driver_specific": { 00:09:07.840 "raid": { 00:09:07.840 "uuid": "83ea0723-405c-43b2-b774-b34959579190", 00:09:07.840 "strip_size_kb": 64, 00:09:07.840 "state": "online", 00:09:07.840 "raid_level": "concat", 00:09:07.840 "superblock": false, 00:09:07.840 "num_base_bdevs": 3, 00:09:07.840 "num_base_bdevs_discovered": 3, 00:09:07.840 "num_base_bdevs_operational": 3, 00:09:07.840 "base_bdevs_list": [ 00:09:07.840 { 00:09:07.840 "name": "BaseBdev1", 00:09:07.840 "uuid": "ec1b4306-b33d-48cc-8402-46ba7dc9875a", 00:09:07.840 "is_configured": true, 00:09:07.840 "data_offset": 0, 00:09:07.840 "data_size": 65536 00:09:07.840 }, 00:09:07.840 { 00:09:07.840 "name": "BaseBdev2", 00:09:07.840 "uuid": "47011c77-5028-4688-894a-61fe9b458725", 00:09:07.840 "is_configured": true, 00:09:07.840 "data_offset": 0, 00:09:07.840 "data_size": 65536 00:09:07.840 }, 00:09:07.840 { 00:09:07.840 "name": "BaseBdev3", 00:09:07.840 "uuid": "2ab19b92-2eef-454c-ba0f-b116787a5195", 00:09:07.840 "is_configured": true, 00:09:07.840 "data_offset": 0, 00:09:07.840 "data_size": 65536 00:09:07.840 } 00:09:07.840 ] 00:09:07.840 } 00:09:07.840 } 00:09:07.840 }' 00:09:07.840 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:08.101 BaseBdev2 00:09:08.101 BaseBdev3' 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.101 18:49:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.101 18:49:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.101 [2024-11-28 18:49:37.642677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:08.101 [2024-11-28 18:49:37.642704] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.101 [2024-11-28 18:49:37.642757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.101 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.102 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.102 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.102 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.102 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.102 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.102 18:49:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.102 "name": "Existed_Raid", 00:09:08.102 "uuid": "83ea0723-405c-43b2-b774-b34959579190", 00:09:08.102 "strip_size_kb": 64, 00:09:08.102 "state": "offline", 00:09:08.102 "raid_level": "concat", 00:09:08.102 "superblock": false, 00:09:08.102 "num_base_bdevs": 3, 00:09:08.102 "num_base_bdevs_discovered": 2, 00:09:08.102 "num_base_bdevs_operational": 2, 00:09:08.102 "base_bdevs_list": [ 00:09:08.102 { 00:09:08.102 "name": null, 00:09:08.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.102 "is_configured": false, 00:09:08.102 "data_offset": 0, 00:09:08.102 "data_size": 65536 00:09:08.102 }, 00:09:08.102 { 00:09:08.102 "name": "BaseBdev2", 00:09:08.102 "uuid": "47011c77-5028-4688-894a-61fe9b458725", 00:09:08.102 "is_configured": true, 00:09:08.102 "data_offset": 0, 00:09:08.102 "data_size": 65536 00:09:08.102 }, 00:09:08.102 { 00:09:08.102 "name": "BaseBdev3", 00:09:08.102 "uuid": "2ab19b92-2eef-454c-ba0f-b116787a5195", 00:09:08.102 "is_configured": true, 00:09:08.102 "data_offset": 0, 00:09:08.102 "data_size": 65536 00:09:08.102 } 00:09:08.102 ] 00:09:08.102 }' 00:09:08.102 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.102 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.672 [2024-11-28 18:49:38.154067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.672 [2024-11-28 18:49:38.205179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.672 [2024-11-28 18:49:38.205271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.672 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.933 BaseBdev2 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.933 [ 00:09:08.933 { 00:09:08.933 "name": "BaseBdev2", 00:09:08.933 "aliases": [ 00:09:08.933 
"9e205f3e-f14e-4d39-a992-82b41bcd1aba" 00:09:08.933 ], 00:09:08.933 "product_name": "Malloc disk", 00:09:08.933 "block_size": 512, 00:09:08.933 "num_blocks": 65536, 00:09:08.933 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:08.933 "assigned_rate_limits": { 00:09:08.933 "rw_ios_per_sec": 0, 00:09:08.933 "rw_mbytes_per_sec": 0, 00:09:08.933 "r_mbytes_per_sec": 0, 00:09:08.933 "w_mbytes_per_sec": 0 00:09:08.933 }, 00:09:08.933 "claimed": false, 00:09:08.933 "zoned": false, 00:09:08.933 "supported_io_types": { 00:09:08.933 "read": true, 00:09:08.933 "write": true, 00:09:08.933 "unmap": true, 00:09:08.933 "flush": true, 00:09:08.933 "reset": true, 00:09:08.933 "nvme_admin": false, 00:09:08.933 "nvme_io": false, 00:09:08.933 "nvme_io_md": false, 00:09:08.933 "write_zeroes": true, 00:09:08.933 "zcopy": true, 00:09:08.933 "get_zone_info": false, 00:09:08.933 "zone_management": false, 00:09:08.933 "zone_append": false, 00:09:08.933 "compare": false, 00:09:08.933 "compare_and_write": false, 00:09:08.933 "abort": true, 00:09:08.933 "seek_hole": false, 00:09:08.933 "seek_data": false, 00:09:08.933 "copy": true, 00:09:08.933 "nvme_iov_md": false 00:09:08.933 }, 00:09:08.933 "memory_domains": [ 00:09:08.933 { 00:09:08.933 "dma_device_id": "system", 00:09:08.933 "dma_device_type": 1 00:09:08.933 }, 00:09:08.933 { 00:09:08.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.933 "dma_device_type": 2 00:09:08.933 } 00:09:08.933 ], 00:09:08.933 "driver_specific": {} 00:09:08.933 } 00:09:08.933 ] 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.933 BaseBdev3 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.933 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.933 [ 00:09:08.933 { 00:09:08.933 "name": "BaseBdev3", 00:09:08.933 "aliases": [ 00:09:08.933 
"0a369f7f-9316-400c-85f2-c2e0702ffa1b" 00:09:08.933 ], 00:09:08.933 "product_name": "Malloc disk", 00:09:08.933 "block_size": 512, 00:09:08.934 "num_blocks": 65536, 00:09:08.934 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:08.934 "assigned_rate_limits": { 00:09:08.934 "rw_ios_per_sec": 0, 00:09:08.934 "rw_mbytes_per_sec": 0, 00:09:08.934 "r_mbytes_per_sec": 0, 00:09:08.934 "w_mbytes_per_sec": 0 00:09:08.934 }, 00:09:08.934 "claimed": false, 00:09:08.934 "zoned": false, 00:09:08.934 "supported_io_types": { 00:09:08.934 "read": true, 00:09:08.934 "write": true, 00:09:08.934 "unmap": true, 00:09:08.934 "flush": true, 00:09:08.934 "reset": true, 00:09:08.934 "nvme_admin": false, 00:09:08.934 "nvme_io": false, 00:09:08.934 "nvme_io_md": false, 00:09:08.934 "write_zeroes": true, 00:09:08.934 "zcopy": true, 00:09:08.934 "get_zone_info": false, 00:09:08.934 "zone_management": false, 00:09:08.934 "zone_append": false, 00:09:08.934 "compare": false, 00:09:08.934 "compare_and_write": false, 00:09:08.934 "abort": true, 00:09:08.934 "seek_hole": false, 00:09:08.934 "seek_data": false, 00:09:08.934 "copy": true, 00:09:08.934 "nvme_iov_md": false 00:09:08.934 }, 00:09:08.934 "memory_domains": [ 00:09:08.934 { 00:09:08.934 "dma_device_id": "system", 00:09:08.934 "dma_device_type": 1 00:09:08.934 }, 00:09:08.934 { 00:09:08.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.934 "dma_device_type": 2 00:09:08.934 } 00:09:08.934 ], 00:09:08.934 "driver_specific": {} 00:09:08.934 } 00:09:08.934 ] 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.934 [2024-11-28 18:49:38.376757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.934 [2024-11-28 18:49:38.376859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.934 [2024-11-28 18:49:38.376898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.934 [2024-11-28 18:49:38.378699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.934 "name": "Existed_Raid", 00:09:08.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.934 "strip_size_kb": 64, 00:09:08.934 "state": "configuring", 00:09:08.934 "raid_level": "concat", 00:09:08.934 "superblock": false, 00:09:08.934 "num_base_bdevs": 3, 00:09:08.934 "num_base_bdevs_discovered": 2, 00:09:08.934 "num_base_bdevs_operational": 3, 00:09:08.934 "base_bdevs_list": [ 00:09:08.934 { 00:09:08.934 "name": "BaseBdev1", 00:09:08.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.934 "is_configured": false, 00:09:08.934 "data_offset": 0, 00:09:08.934 "data_size": 0 00:09:08.934 }, 00:09:08.934 { 00:09:08.934 "name": "BaseBdev2", 00:09:08.934 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:08.934 "is_configured": true, 00:09:08.934 "data_offset": 0, 00:09:08.934 "data_size": 65536 00:09:08.934 }, 00:09:08.934 { 00:09:08.934 "name": "BaseBdev3", 00:09:08.934 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:08.934 "is_configured": true, 00:09:08.934 "data_offset": 0, 00:09:08.934 "data_size": 65536 00:09:08.934 } 00:09:08.934 ] 00:09:08.934 }' 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:08.934 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.504 [2024-11-28 18:49:38.824878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.504 18:49:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.504 "name": "Existed_Raid", 00:09:09.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.504 "strip_size_kb": 64, 00:09:09.504 "state": "configuring", 00:09:09.504 "raid_level": "concat", 00:09:09.504 "superblock": false, 00:09:09.504 "num_base_bdevs": 3, 00:09:09.504 "num_base_bdevs_discovered": 1, 00:09:09.504 "num_base_bdevs_operational": 3, 00:09:09.504 "base_bdevs_list": [ 00:09:09.504 { 00:09:09.504 "name": "BaseBdev1", 00:09:09.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.504 "is_configured": false, 00:09:09.504 "data_offset": 0, 00:09:09.504 "data_size": 0 00:09:09.504 }, 00:09:09.504 { 00:09:09.504 "name": null, 00:09:09.504 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:09.504 "is_configured": false, 00:09:09.504 "data_offset": 0, 00:09:09.504 "data_size": 65536 00:09:09.504 }, 00:09:09.504 { 00:09:09.504 "name": "BaseBdev3", 00:09:09.504 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:09.504 "is_configured": true, 00:09:09.504 "data_offset": 0, 00:09:09.504 "data_size": 65536 00:09:09.504 } 00:09:09.504 ] 00:09:09.504 }' 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.504 18:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.766 18:49:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.766 [2024-11-28 18:49:39.255838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.766 BaseBdev1 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.766 [ 00:09:09.766 { 00:09:09.766 "name": "BaseBdev1", 00:09:09.766 "aliases": [ 00:09:09.766 "c5bacf31-b4f9-4390-af5b-0f78f3f48d30" 00:09:09.766 ], 00:09:09.766 "product_name": "Malloc disk", 00:09:09.766 "block_size": 512, 00:09:09.766 "num_blocks": 65536, 00:09:09.766 "uuid": "c5bacf31-b4f9-4390-af5b-0f78f3f48d30", 00:09:09.766 "assigned_rate_limits": { 00:09:09.766 "rw_ios_per_sec": 0, 00:09:09.766 "rw_mbytes_per_sec": 0, 00:09:09.766 "r_mbytes_per_sec": 0, 00:09:09.766 "w_mbytes_per_sec": 0 00:09:09.766 }, 00:09:09.766 "claimed": true, 00:09:09.766 "claim_type": "exclusive_write", 00:09:09.766 "zoned": false, 00:09:09.766 "supported_io_types": { 00:09:09.766 "read": true, 00:09:09.766 "write": true, 00:09:09.766 "unmap": true, 00:09:09.766 "flush": true, 00:09:09.766 "reset": true, 00:09:09.766 "nvme_admin": false, 00:09:09.766 "nvme_io": false, 00:09:09.766 "nvme_io_md": false, 00:09:09.766 "write_zeroes": true, 00:09:09.766 "zcopy": true, 00:09:09.766 "get_zone_info": false, 00:09:09.766 "zone_management": false, 00:09:09.766 "zone_append": false, 00:09:09.766 "compare": false, 00:09:09.766 "compare_and_write": false, 00:09:09.766 "abort": true, 00:09:09.766 "seek_hole": false, 00:09:09.766 "seek_data": false, 00:09:09.766 "copy": true, 00:09:09.766 "nvme_iov_md": false 00:09:09.766 }, 00:09:09.766 "memory_domains": [ 00:09:09.766 { 00:09:09.766 
"dma_device_id": "system", 00:09:09.766 "dma_device_type": 1 00:09:09.766 }, 00:09:09.766 { 00:09:09.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.766 "dma_device_type": 2 00:09:09.766 } 00:09:09.766 ], 00:09:09.766 "driver_specific": {} 00:09:09.766 } 00:09:09.766 ] 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.766 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.766 "name": "Existed_Raid", 00:09:09.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.766 "strip_size_kb": 64, 00:09:09.766 "state": "configuring", 00:09:09.766 "raid_level": "concat", 00:09:09.766 "superblock": false, 00:09:09.766 "num_base_bdevs": 3, 00:09:09.766 "num_base_bdevs_discovered": 2, 00:09:09.766 "num_base_bdevs_operational": 3, 00:09:09.766 "base_bdevs_list": [ 00:09:09.766 { 00:09:09.766 "name": "BaseBdev1", 00:09:09.766 "uuid": "c5bacf31-b4f9-4390-af5b-0f78f3f48d30", 00:09:09.766 "is_configured": true, 00:09:09.766 "data_offset": 0, 00:09:09.766 "data_size": 65536 00:09:09.766 }, 00:09:09.766 { 00:09:09.766 "name": null, 00:09:09.766 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:09.766 "is_configured": false, 00:09:09.766 "data_offset": 0, 00:09:09.766 "data_size": 65536 00:09:09.766 }, 00:09:09.766 { 00:09:09.766 "name": "BaseBdev3", 00:09:09.766 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:09.766 "is_configured": true, 00:09:09.766 "data_offset": 0, 00:09:09.766 "data_size": 65536 00:09:09.766 } 00:09:09.766 ] 00:09:09.766 }' 00:09:09.767 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.767 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.337 18:49:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.337 [2024-11-28 18:49:39.760017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.337 18:49:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.337 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.337 "name": "Existed_Raid", 00:09:10.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.337 "strip_size_kb": 64, 00:09:10.337 "state": "configuring", 00:09:10.337 "raid_level": "concat", 00:09:10.337 "superblock": false, 00:09:10.337 "num_base_bdevs": 3, 00:09:10.337 "num_base_bdevs_discovered": 1, 00:09:10.337 "num_base_bdevs_operational": 3, 00:09:10.337 "base_bdevs_list": [ 00:09:10.337 { 00:09:10.337 "name": "BaseBdev1", 00:09:10.337 "uuid": "c5bacf31-b4f9-4390-af5b-0f78f3f48d30", 00:09:10.337 "is_configured": true, 00:09:10.337 "data_offset": 0, 00:09:10.337 "data_size": 65536 00:09:10.337 }, 00:09:10.337 { 00:09:10.337 "name": null, 00:09:10.337 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:10.337 "is_configured": false, 00:09:10.337 "data_offset": 0, 00:09:10.337 "data_size": 65536 00:09:10.338 }, 00:09:10.338 { 00:09:10.338 "name": null, 00:09:10.338 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:10.338 "is_configured": false, 00:09:10.338 "data_offset": 0, 00:09:10.338 "data_size": 65536 00:09:10.338 } 00:09:10.338 ] 00:09:10.338 }' 00:09:10.338 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.338 18:49:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.598 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.598 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.598 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.598 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.598 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.859 [2024-11-28 18:49:40.232174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.859 18:49:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.859 "name": "Existed_Raid", 00:09:10.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.859 "strip_size_kb": 64, 00:09:10.859 "state": "configuring", 00:09:10.859 "raid_level": "concat", 00:09:10.859 "superblock": false, 00:09:10.859 "num_base_bdevs": 3, 00:09:10.859 "num_base_bdevs_discovered": 2, 00:09:10.859 "num_base_bdevs_operational": 3, 00:09:10.859 "base_bdevs_list": [ 00:09:10.859 { 00:09:10.859 "name": "BaseBdev1", 00:09:10.859 "uuid": "c5bacf31-b4f9-4390-af5b-0f78f3f48d30", 00:09:10.859 "is_configured": true, 00:09:10.859 "data_offset": 0, 00:09:10.859 "data_size": 65536 00:09:10.859 }, 00:09:10.859 { 00:09:10.859 "name": null, 00:09:10.859 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:10.859 "is_configured": false, 00:09:10.859 "data_offset": 
0, 00:09:10.859 "data_size": 65536 00:09:10.859 }, 00:09:10.859 { 00:09:10.859 "name": "BaseBdev3", 00:09:10.859 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:10.859 "is_configured": true, 00:09:10.859 "data_offset": 0, 00:09:10.859 "data_size": 65536 00:09:10.859 } 00:09:10.859 ] 00:09:10.859 }' 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.859 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.119 [2024-11-28 18:49:40.644288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.119 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.120 "name": "Existed_Raid", 00:09:11.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.120 "strip_size_kb": 64, 00:09:11.120 "state": "configuring", 00:09:11.120 "raid_level": "concat", 00:09:11.120 "superblock": false, 00:09:11.120 "num_base_bdevs": 3, 00:09:11.120 "num_base_bdevs_discovered": 1, 00:09:11.120 "num_base_bdevs_operational": 3, 00:09:11.120 "base_bdevs_list": [ 
00:09:11.120 { 00:09:11.120 "name": null, 00:09:11.120 "uuid": "c5bacf31-b4f9-4390-af5b-0f78f3f48d30", 00:09:11.120 "is_configured": false, 00:09:11.120 "data_offset": 0, 00:09:11.120 "data_size": 65536 00:09:11.120 }, 00:09:11.120 { 00:09:11.120 "name": null, 00:09:11.120 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:11.120 "is_configured": false, 00:09:11.120 "data_offset": 0, 00:09:11.120 "data_size": 65536 00:09:11.120 }, 00:09:11.120 { 00:09:11.120 "name": "BaseBdev3", 00:09:11.120 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:11.120 "is_configured": true, 00:09:11.120 "data_offset": 0, 00:09:11.120 "data_size": 65536 00:09:11.120 } 00:09:11.120 ] 00:09:11.120 }' 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.120 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.688 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.688 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.688 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.688 18:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.688 18:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.688 [2024-11-28 18:49:41.038869] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.688 18:49:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.688 "name": "Existed_Raid", 00:09:11.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.688 "strip_size_kb": 64, 00:09:11.688 "state": "configuring", 00:09:11.689 "raid_level": "concat", 00:09:11.689 "superblock": false, 00:09:11.689 "num_base_bdevs": 3, 00:09:11.689 "num_base_bdevs_discovered": 2, 00:09:11.689 "num_base_bdevs_operational": 3, 00:09:11.689 "base_bdevs_list": [ 00:09:11.689 { 00:09:11.689 "name": null, 00:09:11.689 "uuid": "c5bacf31-b4f9-4390-af5b-0f78f3f48d30", 00:09:11.689 "is_configured": false, 00:09:11.689 "data_offset": 0, 00:09:11.689 "data_size": 65536 00:09:11.689 }, 00:09:11.689 { 00:09:11.689 "name": "BaseBdev2", 00:09:11.689 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:11.689 "is_configured": true, 00:09:11.689 "data_offset": 0, 00:09:11.689 "data_size": 65536 00:09:11.689 }, 00:09:11.689 { 00:09:11.689 "name": "BaseBdev3", 00:09:11.689 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:11.689 "is_configured": true, 00:09:11.689 "data_offset": 0, 00:09:11.689 "data_size": 65536 00:09:11.689 } 00:09:11.689 ] 00:09:11.689 }' 00:09:11.689 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.689 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- 
# [[ true == \t\r\u\e ]] 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c5bacf31-b4f9-4390-af5b-0f78f3f48d30 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.948 [2024-11-28 18:49:41.501839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:11.948 [2024-11-28 18:49:41.501882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:11.948 [2024-11-28 18:49:41.501889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:11.948 [2024-11-28 18:49:41.502129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:11.948 [2024-11-28 18:49:41.502239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:11.948 [2024-11-28 18:49:41.502252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:11.948 [2024-11-28 18:49:41.502440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.948 NewBaseBdev 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.948 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.948 [ 00:09:11.948 { 00:09:11.948 "name": "NewBaseBdev", 00:09:11.948 "aliases": [ 00:09:11.948 "c5bacf31-b4f9-4390-af5b-0f78f3f48d30" 00:09:11.948 ], 00:09:11.948 "product_name": "Malloc disk", 00:09:11.948 "block_size": 512, 00:09:11.948 "num_blocks": 65536, 00:09:11.948 "uuid": "c5bacf31-b4f9-4390-af5b-0f78f3f48d30", 00:09:11.948 "assigned_rate_limits": { 00:09:11.948 "rw_ios_per_sec": 0, 00:09:11.948 "rw_mbytes_per_sec": 0, 00:09:11.948 "r_mbytes_per_sec": 0, 00:09:11.948 "w_mbytes_per_sec": 0 
00:09:11.948 }, 00:09:11.948 "claimed": true, 00:09:11.948 "claim_type": "exclusive_write", 00:09:11.948 "zoned": false, 00:09:11.948 "supported_io_types": { 00:09:11.948 "read": true, 00:09:11.948 "write": true, 00:09:11.948 "unmap": true, 00:09:11.948 "flush": true, 00:09:11.948 "reset": true, 00:09:11.948 "nvme_admin": false, 00:09:11.948 "nvme_io": false, 00:09:11.948 "nvme_io_md": false, 00:09:11.948 "write_zeroes": true, 00:09:11.948 "zcopy": true, 00:09:11.948 "get_zone_info": false, 00:09:11.948 "zone_management": false, 00:09:11.948 "zone_append": false, 00:09:11.948 "compare": false, 00:09:11.948 "compare_and_write": false, 00:09:11.948 "abort": true, 00:09:11.948 "seek_hole": false, 00:09:11.948 "seek_data": false, 00:09:11.948 "copy": true, 00:09:11.948 "nvme_iov_md": false 00:09:11.948 }, 00:09:11.949 "memory_domains": [ 00:09:11.949 { 00:09:11.949 "dma_device_id": "system", 00:09:11.949 "dma_device_type": 1 00:09:11.949 }, 00:09:11.949 { 00:09:11.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.949 "dma_device_type": 2 00:09:11.949 } 00:09:11.949 ], 00:09:11.949 "driver_specific": {} 00:09:11.949 } 00:09:11.949 ] 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.949 18:49:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.949 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.208 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.208 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.208 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.208 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.208 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.208 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.208 "name": "Existed_Raid", 00:09:12.208 "uuid": "fd4f2b9f-3771-47f9-89d3-dabac5613e81", 00:09:12.208 "strip_size_kb": 64, 00:09:12.208 "state": "online", 00:09:12.208 "raid_level": "concat", 00:09:12.208 "superblock": false, 00:09:12.208 "num_base_bdevs": 3, 00:09:12.208 "num_base_bdevs_discovered": 3, 00:09:12.208 "num_base_bdevs_operational": 3, 00:09:12.208 "base_bdevs_list": [ 00:09:12.208 { 00:09:12.208 "name": "NewBaseBdev", 00:09:12.208 "uuid": "c5bacf31-b4f9-4390-af5b-0f78f3f48d30", 00:09:12.208 "is_configured": true, 00:09:12.208 "data_offset": 0, 00:09:12.208 "data_size": 65536 00:09:12.208 }, 00:09:12.208 { 00:09:12.208 "name": "BaseBdev2", 00:09:12.208 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:12.208 "is_configured": true, 00:09:12.208 
"data_offset": 0, 00:09:12.208 "data_size": 65536 00:09:12.208 }, 00:09:12.208 { 00:09:12.208 "name": "BaseBdev3", 00:09:12.208 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:12.208 "is_configured": true, 00:09:12.208 "data_offset": 0, 00:09:12.208 "data_size": 65536 00:09:12.208 } 00:09:12.208 ] 00:09:12.208 }' 00:09:12.208 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.208 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.467 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.467 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.467 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.467 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.467 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.467 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.468 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.468 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.468 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.468 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.468 [2024-11-28 18:49:41.942299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.468 18:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.468 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.468 "name": 
"Existed_Raid", 00:09:12.468 "aliases": [ 00:09:12.468 "fd4f2b9f-3771-47f9-89d3-dabac5613e81" 00:09:12.468 ], 00:09:12.468 "product_name": "Raid Volume", 00:09:12.468 "block_size": 512, 00:09:12.468 "num_blocks": 196608, 00:09:12.468 "uuid": "fd4f2b9f-3771-47f9-89d3-dabac5613e81", 00:09:12.468 "assigned_rate_limits": { 00:09:12.468 "rw_ios_per_sec": 0, 00:09:12.468 "rw_mbytes_per_sec": 0, 00:09:12.468 "r_mbytes_per_sec": 0, 00:09:12.468 "w_mbytes_per_sec": 0 00:09:12.468 }, 00:09:12.468 "claimed": false, 00:09:12.468 "zoned": false, 00:09:12.468 "supported_io_types": { 00:09:12.468 "read": true, 00:09:12.468 "write": true, 00:09:12.468 "unmap": true, 00:09:12.468 "flush": true, 00:09:12.468 "reset": true, 00:09:12.468 "nvme_admin": false, 00:09:12.468 "nvme_io": false, 00:09:12.468 "nvme_io_md": false, 00:09:12.468 "write_zeroes": true, 00:09:12.468 "zcopy": false, 00:09:12.468 "get_zone_info": false, 00:09:12.468 "zone_management": false, 00:09:12.468 "zone_append": false, 00:09:12.468 "compare": false, 00:09:12.468 "compare_and_write": false, 00:09:12.468 "abort": false, 00:09:12.468 "seek_hole": false, 00:09:12.468 "seek_data": false, 00:09:12.468 "copy": false, 00:09:12.468 "nvme_iov_md": false 00:09:12.468 }, 00:09:12.468 "memory_domains": [ 00:09:12.468 { 00:09:12.468 "dma_device_id": "system", 00:09:12.468 "dma_device_type": 1 00:09:12.468 }, 00:09:12.468 { 00:09:12.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.468 "dma_device_type": 2 00:09:12.468 }, 00:09:12.468 { 00:09:12.468 "dma_device_id": "system", 00:09:12.468 "dma_device_type": 1 00:09:12.468 }, 00:09:12.468 { 00:09:12.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.468 "dma_device_type": 2 00:09:12.468 }, 00:09:12.468 { 00:09:12.468 "dma_device_id": "system", 00:09:12.468 "dma_device_type": 1 00:09:12.468 }, 00:09:12.468 { 00:09:12.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.468 "dma_device_type": 2 00:09:12.468 } 00:09:12.468 ], 00:09:12.468 "driver_specific": { 
00:09:12.468 "raid": { 00:09:12.468 "uuid": "fd4f2b9f-3771-47f9-89d3-dabac5613e81", 00:09:12.468 "strip_size_kb": 64, 00:09:12.468 "state": "online", 00:09:12.468 "raid_level": "concat", 00:09:12.468 "superblock": false, 00:09:12.468 "num_base_bdevs": 3, 00:09:12.468 "num_base_bdevs_discovered": 3, 00:09:12.468 "num_base_bdevs_operational": 3, 00:09:12.468 "base_bdevs_list": [ 00:09:12.468 { 00:09:12.468 "name": "NewBaseBdev", 00:09:12.468 "uuid": "c5bacf31-b4f9-4390-af5b-0f78f3f48d30", 00:09:12.468 "is_configured": true, 00:09:12.468 "data_offset": 0, 00:09:12.468 "data_size": 65536 00:09:12.468 }, 00:09:12.468 { 00:09:12.468 "name": "BaseBdev2", 00:09:12.468 "uuid": "9e205f3e-f14e-4d39-a992-82b41bcd1aba", 00:09:12.468 "is_configured": true, 00:09:12.468 "data_offset": 0, 00:09:12.468 "data_size": 65536 00:09:12.468 }, 00:09:12.468 { 00:09:12.468 "name": "BaseBdev3", 00:09:12.468 "uuid": "0a369f7f-9316-400c-85f2-c2e0702ffa1b", 00:09:12.468 "is_configured": true, 00:09:12.468 "data_offset": 0, 00:09:12.468 "data_size": 65536 00:09:12.468 } 00:09:12.468 ] 00:09:12.468 } 00:09:12.468 } 00:09:12.468 }' 00:09:12.468 18:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.468 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:12.468 BaseBdev2 00:09:12.468 BaseBdev3' 00:09:12.468 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.728 18:49:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.728 [2024-11-28 18:49:42.186062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.728 [2024-11-28 18:49:42.186090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.728 [2024-11-28 18:49:42.186153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.728 [2024-11-28 18:49:42.186208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.728 [2024-11-28 18:49:42.186217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78282 00:09:12.728 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78282 ']' 00:09:12.729 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 
-- # kill -0 78282 00:09:12.729 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:12.729 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.729 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78282 00:09:12.729 killing process with pid 78282 00:09:12.729 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.729 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.729 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78282' 00:09:12.729 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78282 00:09:12.729 [2024-11-28 18:49:42.224568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.729 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78282 00:09:12.729 [2024-11-28 18:49:42.254616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:12.990 ************************************ 00:09:12.990 END TEST raid_state_function_test 00:09:12.990 ************************************ 00:09:12.990 00:09:12.990 real 0m8.253s 00:09:12.990 user 0m14.103s 00:09:12.990 sys 0m1.601s 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.990 18:49:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:12.990 18:49:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:12.990 18:49:42 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.990 18:49:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.990 ************************************ 00:09:12.990 START TEST raid_state_function_test_sb 00:09:12.990 ************************************ 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78881 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78881' 00:09:12.990 Process raid pid: 78881 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78881 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78881 ']' 00:09:12.990 
18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.990 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.250 [2024-11-28 18:49:42.639109] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:13.250 [2024-11-28 18:49:42.639348] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.250 [2024-11-28 18:49:42.774684] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:13.250 [2024-11-28 18:49:42.815022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.250 [2024-11-28 18:49:42.840243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.510 [2024-11-28 18:49:42.883086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.510 [2024-11-28 18:49:42.883199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.080 [2024-11-28 18:49:43.463138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.080 [2024-11-28 18:49:43.463189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.080 [2024-11-28 18:49:43.463203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.080 [2024-11-28 18:49:43.463211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.080 [2024-11-28 18:49:43.463223] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.080 [2024-11-28 18:49:43.463230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.080 18:49:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.080 "name": "Existed_Raid", 00:09:14.080 "uuid": "ba00eb18-d1f4-4aec-97a8-a9e7399be768", 00:09:14.080 "strip_size_kb": 64, 
00:09:14.080 "state": "configuring", 00:09:14.080 "raid_level": "concat", 00:09:14.080 "superblock": true, 00:09:14.080 "num_base_bdevs": 3, 00:09:14.080 "num_base_bdevs_discovered": 0, 00:09:14.080 "num_base_bdevs_operational": 3, 00:09:14.080 "base_bdevs_list": [ 00:09:14.080 { 00:09:14.080 "name": "BaseBdev1", 00:09:14.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.080 "is_configured": false, 00:09:14.080 "data_offset": 0, 00:09:14.080 "data_size": 0 00:09:14.080 }, 00:09:14.080 { 00:09:14.080 "name": "BaseBdev2", 00:09:14.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.080 "is_configured": false, 00:09:14.080 "data_offset": 0, 00:09:14.080 "data_size": 0 00:09:14.080 }, 00:09:14.080 { 00:09:14.080 "name": "BaseBdev3", 00:09:14.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.080 "is_configured": false, 00:09:14.080 "data_offset": 0, 00:09:14.080 "data_size": 0 00:09:14.080 } 00:09:14.080 ] 00:09:14.080 }' 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.080 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.341 [2024-11-28 18:49:43.859147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.341 [2024-11-28 18:49:43.859222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.341 [2024-11-28 18:49:43.871203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.341 [2024-11-28 18:49:43.871276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.341 [2024-11-28 18:49:43.871304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.341 [2024-11-28 18:49:43.871324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.341 [2024-11-28 18:49:43.871344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.341 [2024-11-28 18:49:43.871364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.341 [2024-11-28 18:49:43.891954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.341 BaseBdev1 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.341 [ 00:09:14.341 { 00:09:14.341 "name": "BaseBdev1", 00:09:14.341 "aliases": [ 00:09:14.341 "a4d94de6-678d-4330-91e3-79d765fd446a" 00:09:14.341 ], 00:09:14.341 "product_name": "Malloc disk", 00:09:14.341 "block_size": 512, 00:09:14.341 "num_blocks": 65536, 00:09:14.341 "uuid": "a4d94de6-678d-4330-91e3-79d765fd446a", 00:09:14.341 "assigned_rate_limits": { 00:09:14.341 "rw_ios_per_sec": 0, 00:09:14.341 "rw_mbytes_per_sec": 0, 00:09:14.341 "r_mbytes_per_sec": 0, 00:09:14.341 "w_mbytes_per_sec": 0 00:09:14.341 }, 00:09:14.341 "claimed": true, 00:09:14.341 "claim_type": "exclusive_write", 00:09:14.341 "zoned": false, 00:09:14.341 "supported_io_types": { 
00:09:14.341 "read": true, 00:09:14.341 "write": true, 00:09:14.341 "unmap": true, 00:09:14.341 "flush": true, 00:09:14.341 "reset": true, 00:09:14.341 "nvme_admin": false, 00:09:14.341 "nvme_io": false, 00:09:14.341 "nvme_io_md": false, 00:09:14.341 "write_zeroes": true, 00:09:14.341 "zcopy": true, 00:09:14.341 "get_zone_info": false, 00:09:14.341 "zone_management": false, 00:09:14.341 "zone_append": false, 00:09:14.341 "compare": false, 00:09:14.341 "compare_and_write": false, 00:09:14.341 "abort": true, 00:09:14.341 "seek_hole": false, 00:09:14.341 "seek_data": false, 00:09:14.341 "copy": true, 00:09:14.341 "nvme_iov_md": false 00:09:14.341 }, 00:09:14.341 "memory_domains": [ 00:09:14.341 { 00:09:14.341 "dma_device_id": "system", 00:09:14.341 "dma_device_type": 1 00:09:14.341 }, 00:09:14.341 { 00:09:14.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.341 "dma_device_type": 2 00:09:14.341 } 00:09:14.341 ], 00:09:14.341 "driver_specific": {} 00:09:14.341 } 00:09:14.341 ] 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.341 18:49:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.341 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.601 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.601 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.601 "name": "Existed_Raid", 00:09:14.601 "uuid": "91eab6bd-9226-4af1-8a1e-53d3a7111737", 00:09:14.601 "strip_size_kb": 64, 00:09:14.601 "state": "configuring", 00:09:14.601 "raid_level": "concat", 00:09:14.601 "superblock": true, 00:09:14.601 "num_base_bdevs": 3, 00:09:14.601 "num_base_bdevs_discovered": 1, 00:09:14.601 "num_base_bdevs_operational": 3, 00:09:14.601 "base_bdevs_list": [ 00:09:14.601 { 00:09:14.601 "name": "BaseBdev1", 00:09:14.601 "uuid": "a4d94de6-678d-4330-91e3-79d765fd446a", 00:09:14.601 "is_configured": true, 00:09:14.601 "data_offset": 2048, 00:09:14.601 "data_size": 63488 00:09:14.601 }, 00:09:14.601 { 00:09:14.601 "name": "BaseBdev2", 00:09:14.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.601 "is_configured": false, 00:09:14.601 "data_offset": 0, 00:09:14.601 "data_size": 0 00:09:14.601 }, 00:09:14.601 { 00:09:14.601 "name": 
"BaseBdev3", 00:09:14.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.601 "is_configured": false, 00:09:14.601 "data_offset": 0, 00:09:14.601 "data_size": 0 00:09:14.601 } 00:09:14.601 ] 00:09:14.601 }' 00:09:14.601 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.601 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.861 [2024-11-28 18:49:44.296082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.861 [2024-11-28 18:49:44.296131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.861 [2024-11-28 18:49:44.308133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.861 [2024-11-28 18:49:44.309958] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.861 [2024-11-28 18:49:44.310028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.861 [2024-11-28 18:49:44.310061] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.861 [2024-11-28 18:49:44.310081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.861 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.861 "name": "Existed_Raid", 00:09:14.861 "uuid": "ab656c5d-5c0e-480b-8c4a-1ca56f3cf747", 00:09:14.861 "strip_size_kb": 64, 00:09:14.861 "state": "configuring", 00:09:14.861 "raid_level": "concat", 00:09:14.861 "superblock": true, 00:09:14.861 "num_base_bdevs": 3, 00:09:14.861 "num_base_bdevs_discovered": 1, 00:09:14.861 "num_base_bdevs_operational": 3, 00:09:14.861 "base_bdevs_list": [ 00:09:14.861 { 00:09:14.861 "name": "BaseBdev1", 00:09:14.861 "uuid": "a4d94de6-678d-4330-91e3-79d765fd446a", 00:09:14.862 "is_configured": true, 00:09:14.862 "data_offset": 2048, 00:09:14.862 "data_size": 63488 00:09:14.862 }, 00:09:14.862 { 00:09:14.862 "name": "BaseBdev2", 00:09:14.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.862 "is_configured": false, 00:09:14.862 "data_offset": 0, 00:09:14.862 "data_size": 0 00:09:14.862 }, 00:09:14.862 { 00:09:14.862 "name": "BaseBdev3", 00:09:14.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.862 "is_configured": false, 00:09:14.862 "data_offset": 0, 00:09:14.862 "data_size": 0 00:09:14.862 } 00:09:14.862 ] 00:09:14.862 }' 00:09:14.862 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.862 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.176 [2024-11-28 18:49:44.735488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.176 BaseBdev2 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.176 [ 00:09:15.176 { 00:09:15.176 "name": "BaseBdev2", 00:09:15.176 "aliases": [ 00:09:15.176 
"1c7e81ac-2a18-4c10-80c2-c7f9dcade84c" 00:09:15.176 ], 00:09:15.176 "product_name": "Malloc disk", 00:09:15.176 "block_size": 512, 00:09:15.176 "num_blocks": 65536, 00:09:15.176 "uuid": "1c7e81ac-2a18-4c10-80c2-c7f9dcade84c", 00:09:15.176 "assigned_rate_limits": { 00:09:15.176 "rw_ios_per_sec": 0, 00:09:15.176 "rw_mbytes_per_sec": 0, 00:09:15.176 "r_mbytes_per_sec": 0, 00:09:15.176 "w_mbytes_per_sec": 0 00:09:15.176 }, 00:09:15.176 "claimed": true, 00:09:15.176 "claim_type": "exclusive_write", 00:09:15.176 "zoned": false, 00:09:15.176 "supported_io_types": { 00:09:15.176 "read": true, 00:09:15.176 "write": true, 00:09:15.176 "unmap": true, 00:09:15.176 "flush": true, 00:09:15.176 "reset": true, 00:09:15.176 "nvme_admin": false, 00:09:15.176 "nvme_io": false, 00:09:15.176 "nvme_io_md": false, 00:09:15.176 "write_zeroes": true, 00:09:15.176 "zcopy": true, 00:09:15.176 "get_zone_info": false, 00:09:15.176 "zone_management": false, 00:09:15.176 "zone_append": false, 00:09:15.176 "compare": false, 00:09:15.176 "compare_and_write": false, 00:09:15.176 "abort": true, 00:09:15.176 "seek_hole": false, 00:09:15.176 "seek_data": false, 00:09:15.176 "copy": true, 00:09:15.176 "nvme_iov_md": false 00:09:15.176 }, 00:09:15.176 "memory_domains": [ 00:09:15.176 { 00:09:15.176 "dma_device_id": "system", 00:09:15.176 "dma_device_type": 1 00:09:15.176 }, 00:09:15.176 { 00:09:15.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.176 "dma_device_type": 2 00:09:15.176 } 00:09:15.176 ], 00:09:15.176 "driver_specific": {} 00:09:15.176 } 00:09:15.176 ] 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.176 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.436 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.436 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.436 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.436 "name": "Existed_Raid", 00:09:15.436 "uuid": "ab656c5d-5c0e-480b-8c4a-1ca56f3cf747", 00:09:15.436 
"strip_size_kb": 64, 00:09:15.436 "state": "configuring", 00:09:15.436 "raid_level": "concat", 00:09:15.436 "superblock": true, 00:09:15.436 "num_base_bdevs": 3, 00:09:15.436 "num_base_bdevs_discovered": 2, 00:09:15.436 "num_base_bdevs_operational": 3, 00:09:15.436 "base_bdevs_list": [ 00:09:15.436 { 00:09:15.436 "name": "BaseBdev1", 00:09:15.436 "uuid": "a4d94de6-678d-4330-91e3-79d765fd446a", 00:09:15.436 "is_configured": true, 00:09:15.436 "data_offset": 2048, 00:09:15.436 "data_size": 63488 00:09:15.436 }, 00:09:15.436 { 00:09:15.436 "name": "BaseBdev2", 00:09:15.436 "uuid": "1c7e81ac-2a18-4c10-80c2-c7f9dcade84c", 00:09:15.436 "is_configured": true, 00:09:15.436 "data_offset": 2048, 00:09:15.436 "data_size": 63488 00:09:15.436 }, 00:09:15.436 { 00:09:15.436 "name": "BaseBdev3", 00:09:15.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.436 "is_configured": false, 00:09:15.436 "data_offset": 0, 00:09:15.436 "data_size": 0 00:09:15.436 } 00:09:15.436 ] 00:09:15.436 }' 00:09:15.436 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.436 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.696 [2024-11-28 18:49:45.232585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.696 [2024-11-28 18:49:45.233298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:15.696 BaseBdev3 00:09:15.696 [2024-11-28 18:49:45.233533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.696 [2024-11-28 18:49:45.234591] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:15.696 [2024-11-28 18:49:45.235041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:15.696 [2024-11-28 18:49:45.235092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.696 [2024-11-28 18:49:45.235777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.696 [ 00:09:15.696 { 00:09:15.696 "name": "BaseBdev3", 00:09:15.696 "aliases": [ 00:09:15.696 "2032b3c8-323d-4b4d-a056-dfbf52f1e1b0" 00:09:15.696 ], 00:09:15.696 "product_name": "Malloc disk", 00:09:15.696 "block_size": 512, 00:09:15.696 "num_blocks": 65536, 00:09:15.696 "uuid": "2032b3c8-323d-4b4d-a056-dfbf52f1e1b0", 00:09:15.696 "assigned_rate_limits": { 00:09:15.696 "rw_ios_per_sec": 0, 00:09:15.696 "rw_mbytes_per_sec": 0, 00:09:15.696 "r_mbytes_per_sec": 0, 00:09:15.696 "w_mbytes_per_sec": 0 00:09:15.696 }, 00:09:15.696 "claimed": true, 00:09:15.696 "claim_type": "exclusive_write", 00:09:15.696 "zoned": false, 00:09:15.696 "supported_io_types": { 00:09:15.696 "read": true, 00:09:15.696 "write": true, 00:09:15.696 "unmap": true, 00:09:15.696 "flush": true, 00:09:15.696 "reset": true, 00:09:15.696 "nvme_admin": false, 00:09:15.696 "nvme_io": false, 00:09:15.696 "nvme_io_md": false, 00:09:15.696 "write_zeroes": true, 00:09:15.696 "zcopy": true, 00:09:15.696 "get_zone_info": false, 00:09:15.696 "zone_management": false, 00:09:15.696 "zone_append": false, 00:09:15.696 "compare": false, 00:09:15.696 "compare_and_write": false, 00:09:15.696 "abort": true, 00:09:15.696 "seek_hole": false, 00:09:15.696 "seek_data": false, 00:09:15.696 "copy": true, 00:09:15.696 "nvme_iov_md": false 00:09:15.696 }, 00:09:15.696 "memory_domains": [ 00:09:15.696 { 00:09:15.696 "dma_device_id": "system", 00:09:15.696 "dma_device_type": 1 00:09:15.696 }, 00:09:15.696 { 00:09:15.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.696 "dma_device_type": 2 00:09:15.696 } 00:09:15.696 ], 00:09:15.696 "driver_specific": {} 00:09:15.696 } 00:09:15.696 ] 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.696 
18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.696 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.954 18:49:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.954 "name": "Existed_Raid", 00:09:15.954 "uuid": "ab656c5d-5c0e-480b-8c4a-1ca56f3cf747", 00:09:15.954 "strip_size_kb": 64, 00:09:15.954 "state": "online", 00:09:15.954 "raid_level": "concat", 00:09:15.954 "superblock": true, 00:09:15.954 "num_base_bdevs": 3, 00:09:15.954 "num_base_bdevs_discovered": 3, 00:09:15.954 "num_base_bdevs_operational": 3, 00:09:15.954 "base_bdevs_list": [ 00:09:15.954 { 00:09:15.954 "name": "BaseBdev1", 00:09:15.954 "uuid": "a4d94de6-678d-4330-91e3-79d765fd446a", 00:09:15.954 "is_configured": true, 00:09:15.954 "data_offset": 2048, 00:09:15.954 "data_size": 63488 00:09:15.954 }, 00:09:15.954 { 00:09:15.954 "name": "BaseBdev2", 00:09:15.954 "uuid": "1c7e81ac-2a18-4c10-80c2-c7f9dcade84c", 00:09:15.954 "is_configured": true, 00:09:15.954 "data_offset": 2048, 00:09:15.954 "data_size": 63488 00:09:15.954 }, 00:09:15.954 { 00:09:15.954 "name": "BaseBdev3", 00:09:15.954 "uuid": "2032b3c8-323d-4b4d-a056-dfbf52f1e1b0", 00:09:15.954 "is_configured": true, 00:09:15.954 "data_offset": 2048, 00:09:15.954 "data_size": 63488 00:09:15.954 } 00:09:15.954 ] 00:09:15.954 }' 00:09:15.954 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.954 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.214 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.214 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.214 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.214 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.214 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.214 
18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.215 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.215 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.215 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.215 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.215 [2024-11-28 18:49:45.736994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.215 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.215 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.215 "name": "Existed_Raid", 00:09:16.215 "aliases": [ 00:09:16.215 "ab656c5d-5c0e-480b-8c4a-1ca56f3cf747" 00:09:16.215 ], 00:09:16.215 "product_name": "Raid Volume", 00:09:16.215 "block_size": 512, 00:09:16.215 "num_blocks": 190464, 00:09:16.215 "uuid": "ab656c5d-5c0e-480b-8c4a-1ca56f3cf747", 00:09:16.215 "assigned_rate_limits": { 00:09:16.215 "rw_ios_per_sec": 0, 00:09:16.215 "rw_mbytes_per_sec": 0, 00:09:16.215 "r_mbytes_per_sec": 0, 00:09:16.215 "w_mbytes_per_sec": 0 00:09:16.215 }, 00:09:16.215 "claimed": false, 00:09:16.215 "zoned": false, 00:09:16.215 "supported_io_types": { 00:09:16.215 "read": true, 00:09:16.215 "write": true, 00:09:16.215 "unmap": true, 00:09:16.215 "flush": true, 00:09:16.215 "reset": true, 00:09:16.215 "nvme_admin": false, 00:09:16.215 "nvme_io": false, 00:09:16.215 "nvme_io_md": false, 00:09:16.215 "write_zeroes": true, 00:09:16.215 "zcopy": false, 00:09:16.215 "get_zone_info": false, 00:09:16.215 "zone_management": false, 00:09:16.215 "zone_append": false, 00:09:16.215 "compare": false, 00:09:16.215 "compare_and_write": false, 00:09:16.215 "abort": 
false, 00:09:16.215 "seek_hole": false, 00:09:16.215 "seek_data": false, 00:09:16.215 "copy": false, 00:09:16.215 "nvme_iov_md": false 00:09:16.215 }, 00:09:16.215 "memory_domains": [ 00:09:16.215 { 00:09:16.215 "dma_device_id": "system", 00:09:16.215 "dma_device_type": 1 00:09:16.215 }, 00:09:16.215 { 00:09:16.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.215 "dma_device_type": 2 00:09:16.215 }, 00:09:16.215 { 00:09:16.215 "dma_device_id": "system", 00:09:16.215 "dma_device_type": 1 00:09:16.215 }, 00:09:16.215 { 00:09:16.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.215 "dma_device_type": 2 00:09:16.215 }, 00:09:16.215 { 00:09:16.215 "dma_device_id": "system", 00:09:16.215 "dma_device_type": 1 00:09:16.215 }, 00:09:16.215 { 00:09:16.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.215 "dma_device_type": 2 00:09:16.215 } 00:09:16.215 ], 00:09:16.215 "driver_specific": { 00:09:16.215 "raid": { 00:09:16.215 "uuid": "ab656c5d-5c0e-480b-8c4a-1ca56f3cf747", 00:09:16.215 "strip_size_kb": 64, 00:09:16.215 "state": "online", 00:09:16.215 "raid_level": "concat", 00:09:16.215 "superblock": true, 00:09:16.215 "num_base_bdevs": 3, 00:09:16.215 "num_base_bdevs_discovered": 3, 00:09:16.215 "num_base_bdevs_operational": 3, 00:09:16.215 "base_bdevs_list": [ 00:09:16.215 { 00:09:16.215 "name": "BaseBdev1", 00:09:16.215 "uuid": "a4d94de6-678d-4330-91e3-79d765fd446a", 00:09:16.215 "is_configured": true, 00:09:16.215 "data_offset": 2048, 00:09:16.215 "data_size": 63488 00:09:16.215 }, 00:09:16.215 { 00:09:16.215 "name": "BaseBdev2", 00:09:16.215 "uuid": "1c7e81ac-2a18-4c10-80c2-c7f9dcade84c", 00:09:16.215 "is_configured": true, 00:09:16.215 "data_offset": 2048, 00:09:16.215 "data_size": 63488 00:09:16.215 }, 00:09:16.215 { 00:09:16.215 "name": "BaseBdev3", 00:09:16.215 "uuid": "2032b3c8-323d-4b4d-a056-dfbf52f1e1b0", 00:09:16.215 "is_configured": true, 00:09:16.215 "data_offset": 2048, 00:09:16.215 "data_size": 63488 00:09:16.215 } 00:09:16.215 ] 00:09:16.215 } 
00:09:16.215 } 00:09:16.215 }' 00:09:16.215 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.475 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.475 BaseBdev2 00:09:16.475 BaseBdev3' 00:09:16.475 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.475 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.475 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.475 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.476 18:49:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.476 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.476 [2024-11-28 18:49:46.000866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:09:16.476 [2024-11-28 18:49:46.000893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.476 [2024-11-28 18:49:46.000949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.476 
18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.476 "name": "Existed_Raid", 00:09:16.476 "uuid": "ab656c5d-5c0e-480b-8c4a-1ca56f3cf747", 00:09:16.476 "strip_size_kb": 64, 00:09:16.476 "state": "offline", 00:09:16.476 "raid_level": "concat", 00:09:16.476 "superblock": true, 00:09:16.476 "num_base_bdevs": 3, 00:09:16.476 "num_base_bdevs_discovered": 2, 00:09:16.476 "num_base_bdevs_operational": 2, 00:09:16.476 "base_bdevs_list": [ 00:09:16.476 { 00:09:16.476 "name": null, 00:09:16.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.476 "is_configured": false, 00:09:16.476 "data_offset": 0, 00:09:16.476 "data_size": 63488 00:09:16.476 }, 00:09:16.476 { 00:09:16.476 "name": "BaseBdev2", 00:09:16.476 "uuid": "1c7e81ac-2a18-4c10-80c2-c7f9dcade84c", 00:09:16.476 "is_configured": true, 00:09:16.476 "data_offset": 2048, 00:09:16.476 "data_size": 63488 00:09:16.476 }, 00:09:16.476 { 00:09:16.476 "name": "BaseBdev3", 00:09:16.476 "uuid": "2032b3c8-323d-4b4d-a056-dfbf52f1e1b0", 00:09:16.476 "is_configured": true, 00:09:16.476 "data_offset": 2048, 00:09:16.476 "data_size": 63488 00:09:16.476 } 00:09:16.476 ] 00:09:16.476 }' 00:09:16.476 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.476 
18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.045 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.046 [2024-11-28 18:49:46.492146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.046 [2024-11-28 18:49:46.559251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:17.046 [2024-11-28 18:49:46.559303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.046 BaseBdev2 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.046 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.306 [ 00:09:17.306 { 00:09:17.306 "name": "BaseBdev2", 00:09:17.306 "aliases": [ 00:09:17.306 "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a" 00:09:17.306 ], 00:09:17.306 "product_name": "Malloc disk", 00:09:17.306 "block_size": 512, 00:09:17.306 "num_blocks": 65536, 00:09:17.306 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:17.306 "assigned_rate_limits": { 00:09:17.306 "rw_ios_per_sec": 0, 00:09:17.306 "rw_mbytes_per_sec": 0, 00:09:17.306 "r_mbytes_per_sec": 0, 00:09:17.306 "w_mbytes_per_sec": 0 00:09:17.306 }, 00:09:17.306 "claimed": false, 00:09:17.306 "zoned": false, 00:09:17.306 "supported_io_types": { 00:09:17.306 "read": true, 00:09:17.306 "write": true, 00:09:17.306 "unmap": true, 00:09:17.306 "flush": true, 00:09:17.306 "reset": true, 00:09:17.306 "nvme_admin": false, 00:09:17.306 "nvme_io": false, 00:09:17.306 "nvme_io_md": false, 00:09:17.306 "write_zeroes": true, 00:09:17.306 "zcopy": true, 00:09:17.306 "get_zone_info": false, 00:09:17.306 "zone_management": false, 00:09:17.306 "zone_append": false, 00:09:17.306 "compare": false, 00:09:17.306 "compare_and_write": false, 00:09:17.306 "abort": true, 00:09:17.306 "seek_hole": false, 00:09:17.306 "seek_data": false, 00:09:17.306 "copy": true, 00:09:17.306 
"nvme_iov_md": false 00:09:17.306 }, 00:09:17.306 "memory_domains": [ 00:09:17.306 { 00:09:17.306 "dma_device_id": "system", 00:09:17.306 "dma_device_type": 1 00:09:17.306 }, 00:09:17.306 { 00:09:17.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.306 "dma_device_type": 2 00:09:17.306 } 00:09:17.306 ], 00:09:17.306 "driver_specific": {} 00:09:17.306 } 00:09:17.306 ] 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.306 BaseBdev3 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.306 
18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.306 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.306 [ 00:09:17.306 { 00:09:17.306 "name": "BaseBdev3", 00:09:17.306 "aliases": [ 00:09:17.306 "6508af2a-cc62-4f42-974f-4debb2e6f96e" 00:09:17.306 ], 00:09:17.306 "product_name": "Malloc disk", 00:09:17.306 "block_size": 512, 00:09:17.306 "num_blocks": 65536, 00:09:17.306 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:17.306 "assigned_rate_limits": { 00:09:17.306 "rw_ios_per_sec": 0, 00:09:17.306 "rw_mbytes_per_sec": 0, 00:09:17.306 "r_mbytes_per_sec": 0, 00:09:17.306 "w_mbytes_per_sec": 0 00:09:17.306 }, 00:09:17.306 "claimed": false, 00:09:17.306 "zoned": false, 00:09:17.306 "supported_io_types": { 00:09:17.306 "read": true, 00:09:17.307 "write": true, 00:09:17.307 "unmap": true, 00:09:17.307 "flush": true, 00:09:17.307 "reset": true, 00:09:17.307 "nvme_admin": false, 00:09:17.307 "nvme_io": false, 00:09:17.307 "nvme_io_md": false, 00:09:17.307 "write_zeroes": true, 00:09:17.307 "zcopy": true, 00:09:17.307 "get_zone_info": false, 00:09:17.307 "zone_management": false, 00:09:17.307 "zone_append": false, 00:09:17.307 "compare": false, 00:09:17.307 "compare_and_write": false, 00:09:17.307 "abort": true, 00:09:17.307 "seek_hole": false, 00:09:17.307 "seek_data": 
false, 00:09:17.307 "copy": true, 00:09:17.307 "nvme_iov_md": false 00:09:17.307 }, 00:09:17.307 "memory_domains": [ 00:09:17.307 { 00:09:17.307 "dma_device_id": "system", 00:09:17.307 "dma_device_type": 1 00:09:17.307 }, 00:09:17.307 { 00:09:17.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.307 "dma_device_type": 2 00:09:17.307 } 00:09:17.307 ], 00:09:17.307 "driver_specific": {} 00:09:17.307 } 00:09:17.307 ] 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.307 [2024-11-28 18:49:46.734134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.307 [2024-11-28 18:49:46.734181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.307 [2024-11-28 18:49:46.734198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.307 [2024-11-28 18:49:46.735975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.307 "name": "Existed_Raid", 00:09:17.307 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:17.307 "strip_size_kb": 64, 00:09:17.307 "state": "configuring", 00:09:17.307 "raid_level": "concat", 
00:09:17.307 "superblock": true, 00:09:17.307 "num_base_bdevs": 3, 00:09:17.307 "num_base_bdevs_discovered": 2, 00:09:17.307 "num_base_bdevs_operational": 3, 00:09:17.307 "base_bdevs_list": [ 00:09:17.307 { 00:09:17.307 "name": "BaseBdev1", 00:09:17.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.307 "is_configured": false, 00:09:17.307 "data_offset": 0, 00:09:17.307 "data_size": 0 00:09:17.307 }, 00:09:17.307 { 00:09:17.307 "name": "BaseBdev2", 00:09:17.307 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:17.307 "is_configured": true, 00:09:17.307 "data_offset": 2048, 00:09:17.307 "data_size": 63488 00:09:17.307 }, 00:09:17.307 { 00:09:17.307 "name": "BaseBdev3", 00:09:17.307 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:17.307 "is_configured": true, 00:09:17.307 "data_offset": 2048, 00:09:17.307 "data_size": 63488 00:09:17.307 } 00:09:17.307 ] 00:09:17.307 }' 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.307 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.566 [2024-11-28 18:49:47.162228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.566 18:49:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.566 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.825 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.825 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.825 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.825 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.825 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.825 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.825 "name": "Existed_Raid", 00:09:17.825 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:17.825 "strip_size_kb": 64, 00:09:17.825 "state": "configuring", 00:09:17.825 "raid_level": "concat", 00:09:17.825 "superblock": true, 00:09:17.825 "num_base_bdevs": 3, 00:09:17.825 "num_base_bdevs_discovered": 1, 00:09:17.825 "num_base_bdevs_operational": 3, 00:09:17.825 "base_bdevs_list": [ 00:09:17.825 
{ 00:09:17.825 "name": "BaseBdev1", 00:09:17.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.825 "is_configured": false, 00:09:17.825 "data_offset": 0, 00:09:17.825 "data_size": 0 00:09:17.825 }, 00:09:17.825 { 00:09:17.825 "name": null, 00:09:17.825 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:17.825 "is_configured": false, 00:09:17.825 "data_offset": 0, 00:09:17.825 "data_size": 63488 00:09:17.825 }, 00:09:17.825 { 00:09:17.825 "name": "BaseBdev3", 00:09:17.825 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:17.825 "is_configured": true, 00:09:17.825 "data_offset": 2048, 00:09:17.825 "data_size": 63488 00:09:17.825 } 00:09:17.826 ] 00:09:17.826 }' 00:09:17.826 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.826 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.085 [2024-11-28 18:49:47.617267] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.085 BaseBdev1 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.085 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.086 [ 00:09:18.086 { 00:09:18.086 "name": "BaseBdev1", 00:09:18.086 "aliases": [ 00:09:18.086 "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0" 00:09:18.086 ], 00:09:18.086 "product_name": "Malloc disk", 00:09:18.086 "block_size": 512, 00:09:18.086 "num_blocks": 65536, 00:09:18.086 
"uuid": "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0", 00:09:18.086 "assigned_rate_limits": { 00:09:18.086 "rw_ios_per_sec": 0, 00:09:18.086 "rw_mbytes_per_sec": 0, 00:09:18.086 "r_mbytes_per_sec": 0, 00:09:18.086 "w_mbytes_per_sec": 0 00:09:18.086 }, 00:09:18.086 "claimed": true, 00:09:18.086 "claim_type": "exclusive_write", 00:09:18.086 "zoned": false, 00:09:18.086 "supported_io_types": { 00:09:18.086 "read": true, 00:09:18.086 "write": true, 00:09:18.086 "unmap": true, 00:09:18.086 "flush": true, 00:09:18.086 "reset": true, 00:09:18.086 "nvme_admin": false, 00:09:18.086 "nvme_io": false, 00:09:18.086 "nvme_io_md": false, 00:09:18.086 "write_zeroes": true, 00:09:18.086 "zcopy": true, 00:09:18.086 "get_zone_info": false, 00:09:18.086 "zone_management": false, 00:09:18.086 "zone_append": false, 00:09:18.086 "compare": false, 00:09:18.086 "compare_and_write": false, 00:09:18.086 "abort": true, 00:09:18.086 "seek_hole": false, 00:09:18.086 "seek_data": false, 00:09:18.086 "copy": true, 00:09:18.086 "nvme_iov_md": false 00:09:18.086 }, 00:09:18.086 "memory_domains": [ 00:09:18.086 { 00:09:18.086 "dma_device_id": "system", 00:09:18.086 "dma_device_type": 1 00:09:18.086 }, 00:09:18.086 { 00:09:18.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.086 "dma_device_type": 2 00:09:18.086 } 00:09:18.086 ], 00:09:18.086 "driver_specific": {} 00:09:18.086 } 00:09:18.086 ] 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.086 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.346 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.346 "name": "Existed_Raid", 00:09:18.346 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:18.346 "strip_size_kb": 64, 00:09:18.346 "state": "configuring", 00:09:18.346 "raid_level": "concat", 00:09:18.346 "superblock": true, 00:09:18.346 "num_base_bdevs": 3, 00:09:18.346 "num_base_bdevs_discovered": 2, 00:09:18.346 "num_base_bdevs_operational": 3, 00:09:18.346 "base_bdevs_list": [ 00:09:18.346 { 00:09:18.346 "name": "BaseBdev1", 00:09:18.346 "uuid": "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0", 
00:09:18.346 "is_configured": true, 00:09:18.346 "data_offset": 2048, 00:09:18.346 "data_size": 63488 00:09:18.346 }, 00:09:18.346 { 00:09:18.346 "name": null, 00:09:18.346 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:18.346 "is_configured": false, 00:09:18.346 "data_offset": 0, 00:09:18.346 "data_size": 63488 00:09:18.346 }, 00:09:18.346 { 00:09:18.346 "name": "BaseBdev3", 00:09:18.346 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:18.346 "is_configured": true, 00:09:18.346 "data_offset": 2048, 00:09:18.346 "data_size": 63488 00:09:18.346 } 00:09:18.346 ] 00:09:18.346 }' 00:09:18.346 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.346 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.606 [2024-11-28 18:49:48.109448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.606 18:49:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.606 "name": 
"Existed_Raid", 00:09:18.606 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:18.606 "strip_size_kb": 64, 00:09:18.606 "state": "configuring", 00:09:18.606 "raid_level": "concat", 00:09:18.606 "superblock": true, 00:09:18.606 "num_base_bdevs": 3, 00:09:18.606 "num_base_bdevs_discovered": 1, 00:09:18.606 "num_base_bdevs_operational": 3, 00:09:18.606 "base_bdevs_list": [ 00:09:18.606 { 00:09:18.606 "name": "BaseBdev1", 00:09:18.606 "uuid": "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0", 00:09:18.606 "is_configured": true, 00:09:18.606 "data_offset": 2048, 00:09:18.606 "data_size": 63488 00:09:18.606 }, 00:09:18.606 { 00:09:18.606 "name": null, 00:09:18.606 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:18.606 "is_configured": false, 00:09:18.606 "data_offset": 0, 00:09:18.606 "data_size": 63488 00:09:18.606 }, 00:09:18.606 { 00:09:18.606 "name": null, 00:09:18.606 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:18.606 "is_configured": false, 00:09:18.606 "data_offset": 0, 00:09:18.606 "data_size": 63488 00:09:18.606 } 00:09:18.606 ] 00:09:18.606 }' 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.606 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:19.175 
18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.175 [2024-11-28 18:49:48.609644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.175 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.176 "name": "Existed_Raid", 00:09:19.176 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:19.176 "strip_size_kb": 64, 00:09:19.176 "state": "configuring", 00:09:19.176 "raid_level": "concat", 00:09:19.176 "superblock": true, 00:09:19.176 "num_base_bdevs": 3, 00:09:19.176 "num_base_bdevs_discovered": 2, 00:09:19.176 "num_base_bdevs_operational": 3, 00:09:19.176 "base_bdevs_list": [ 00:09:19.176 { 00:09:19.176 "name": "BaseBdev1", 00:09:19.176 "uuid": "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0", 00:09:19.176 "is_configured": true, 00:09:19.176 "data_offset": 2048, 00:09:19.176 "data_size": 63488 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "name": null, 00:09:19.176 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:19.176 "is_configured": false, 00:09:19.176 "data_offset": 0, 00:09:19.176 "data_size": 63488 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "name": "BaseBdev3", 00:09:19.176 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:19.176 "is_configured": true, 00:09:19.176 "data_offset": 2048, 00:09:19.176 "data_size": 63488 00:09:19.176 } 00:09:19.176 ] 00:09:19.176 }' 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.176 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.744 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.744 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 
-- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.744 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.744 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.744 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.744 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:19.744 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.745 [2024-11-28 18:49:49.117783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.745 "name": "Existed_Raid", 00:09:19.745 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:19.745 "strip_size_kb": 64, 00:09:19.745 "state": "configuring", 00:09:19.745 "raid_level": "concat", 00:09:19.745 "superblock": true, 00:09:19.745 "num_base_bdevs": 3, 00:09:19.745 "num_base_bdevs_discovered": 1, 00:09:19.745 "num_base_bdevs_operational": 3, 00:09:19.745 "base_bdevs_list": [ 00:09:19.745 { 00:09:19.745 "name": null, 00:09:19.745 "uuid": "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0", 00:09:19.745 "is_configured": false, 00:09:19.745 "data_offset": 0, 00:09:19.745 "data_size": 63488 00:09:19.745 }, 00:09:19.745 { 00:09:19.745 "name": null, 00:09:19.745 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:19.745 "is_configured": false, 00:09:19.745 "data_offset": 0, 00:09:19.745 "data_size": 63488 00:09:19.745 }, 00:09:19.745 { 00:09:19.745 "name": "BaseBdev3", 00:09:19.745 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:19.745 "is_configured": true, 00:09:19.745 "data_offset": 2048, 00:09:19.745 "data_size": 63488 00:09:19.745 } 
00:09:19.745 ] 00:09:19.745 }' 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.745 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.004 [2024-11-28 18:49:49.596267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.004 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.005 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.005 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.005 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.005 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.264 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.264 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.264 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.264 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.264 "name": "Existed_Raid", 00:09:20.264 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:20.264 "strip_size_kb": 64, 00:09:20.264 "state": "configuring", 00:09:20.264 "raid_level": "concat", 00:09:20.264 "superblock": true, 00:09:20.264 "num_base_bdevs": 3, 00:09:20.264 "num_base_bdevs_discovered": 2, 00:09:20.264 "num_base_bdevs_operational": 3, 00:09:20.264 "base_bdevs_list": [ 00:09:20.264 { 00:09:20.264 "name": null, 00:09:20.264 "uuid": "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0", 00:09:20.264 "is_configured": false, 00:09:20.264 "data_offset": 0, 
00:09:20.264 "data_size": 63488 00:09:20.264 }, 00:09:20.264 { 00:09:20.264 "name": "BaseBdev2", 00:09:20.264 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:20.264 "is_configured": true, 00:09:20.264 "data_offset": 2048, 00:09:20.264 "data_size": 63488 00:09:20.264 }, 00:09:20.264 { 00:09:20.264 "name": "BaseBdev3", 00:09:20.264 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:20.264 "is_configured": true, 00:09:20.264 "data_offset": 2048, 00:09:20.264 "data_size": 63488 00:09:20.264 } 00:09:20.264 ] 00:09:20.264 }' 00:09:20.264 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.264 18:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.784 [2024-11-28 18:49:50.155311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:20.784 [2024-11-28 18:49:50.155584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.784 [2024-11-28 18:49:50.155634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.784 NewBaseBdev 00:09:20.784 [2024-11-28 18:49:50.155915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:20.784 [2024-11-28 18:49:50.156040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.784 [2024-11-28 18:49:50.156103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:20.784 [2024-11-28 18:49:50.156243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.784 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.784 [ 00:09:20.784 { 00:09:20.784 "name": "NewBaseBdev", 00:09:20.784 "aliases": [ 00:09:20.784 "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0" 00:09:20.784 ], 00:09:20.784 "product_name": "Malloc disk", 00:09:20.784 "block_size": 512, 00:09:20.784 "num_blocks": 65536, 00:09:20.784 "uuid": "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0", 00:09:20.784 "assigned_rate_limits": { 00:09:20.784 "rw_ios_per_sec": 0, 00:09:20.784 "rw_mbytes_per_sec": 0, 00:09:20.784 "r_mbytes_per_sec": 0, 00:09:20.784 "w_mbytes_per_sec": 0 00:09:20.784 }, 00:09:20.784 "claimed": true, 00:09:20.785 "claim_type": "exclusive_write", 00:09:20.785 "zoned": false, 00:09:20.785 "supported_io_types": { 00:09:20.785 "read": true, 00:09:20.785 "write": true, 00:09:20.785 "unmap": true, 00:09:20.785 "flush": true, 00:09:20.785 "reset": true, 00:09:20.785 "nvme_admin": false, 00:09:20.785 "nvme_io": false, 00:09:20.785 "nvme_io_md": false, 00:09:20.785 "write_zeroes": true, 00:09:20.785 "zcopy": true, 00:09:20.785 "get_zone_info": false, 
00:09:20.785 "zone_management": false, 00:09:20.785 "zone_append": false, 00:09:20.785 "compare": false, 00:09:20.785 "compare_and_write": false, 00:09:20.785 "abort": true, 00:09:20.785 "seek_hole": false, 00:09:20.785 "seek_data": false, 00:09:20.785 "copy": true, 00:09:20.785 "nvme_iov_md": false 00:09:20.785 }, 00:09:20.785 "memory_domains": [ 00:09:20.785 { 00:09:20.785 "dma_device_id": "system", 00:09:20.785 "dma_device_type": 1 00:09:20.785 }, 00:09:20.785 { 00:09:20.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.785 "dma_device_type": 2 00:09:20.785 } 00:09:20.785 ], 00:09:20.785 "driver_specific": {} 00:09:20.785 } 00:09:20.785 ] 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.785 18:49:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.785 "name": "Existed_Raid", 00:09:20.785 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:20.785 "strip_size_kb": 64, 00:09:20.785 "state": "online", 00:09:20.785 "raid_level": "concat", 00:09:20.785 "superblock": true, 00:09:20.785 "num_base_bdevs": 3, 00:09:20.785 "num_base_bdevs_discovered": 3, 00:09:20.785 "num_base_bdevs_operational": 3, 00:09:20.785 "base_bdevs_list": [ 00:09:20.785 { 00:09:20.785 "name": "NewBaseBdev", 00:09:20.785 "uuid": "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0", 00:09:20.785 "is_configured": true, 00:09:20.785 "data_offset": 2048, 00:09:20.785 "data_size": 63488 00:09:20.785 }, 00:09:20.785 { 00:09:20.785 "name": "BaseBdev2", 00:09:20.785 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:20.785 "is_configured": true, 00:09:20.785 "data_offset": 2048, 00:09:20.785 "data_size": 63488 00:09:20.785 }, 00:09:20.785 { 00:09:20.785 "name": "BaseBdev3", 00:09:20.785 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:20.785 "is_configured": true, 00:09:20.785 "data_offset": 2048, 00:09:20.785 "data_size": 63488 00:09:20.785 } 00:09:20.785 ] 00:09:20.785 }' 00:09:20.785 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.785 
18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.045 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.045 [2024-11-28 18:49:50.627772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.305 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.305 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.305 "name": "Existed_Raid", 00:09:21.305 "aliases": [ 00:09:21.305 "eab131fc-c69e-4a69-b361-a89067e0dabd" 00:09:21.305 ], 00:09:21.305 "product_name": "Raid Volume", 00:09:21.305 "block_size": 512, 00:09:21.305 "num_blocks": 190464, 00:09:21.305 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:21.305 "assigned_rate_limits": { 00:09:21.305 "rw_ios_per_sec": 0, 00:09:21.305 "rw_mbytes_per_sec": 0, 
00:09:21.305 "r_mbytes_per_sec": 0, 00:09:21.305 "w_mbytes_per_sec": 0 00:09:21.305 }, 00:09:21.305 "claimed": false, 00:09:21.305 "zoned": false, 00:09:21.305 "supported_io_types": { 00:09:21.305 "read": true, 00:09:21.305 "write": true, 00:09:21.305 "unmap": true, 00:09:21.305 "flush": true, 00:09:21.305 "reset": true, 00:09:21.305 "nvme_admin": false, 00:09:21.305 "nvme_io": false, 00:09:21.305 "nvme_io_md": false, 00:09:21.305 "write_zeroes": true, 00:09:21.305 "zcopy": false, 00:09:21.305 "get_zone_info": false, 00:09:21.305 "zone_management": false, 00:09:21.305 "zone_append": false, 00:09:21.305 "compare": false, 00:09:21.305 "compare_and_write": false, 00:09:21.305 "abort": false, 00:09:21.305 "seek_hole": false, 00:09:21.305 "seek_data": false, 00:09:21.305 "copy": false, 00:09:21.305 "nvme_iov_md": false 00:09:21.305 }, 00:09:21.305 "memory_domains": [ 00:09:21.305 { 00:09:21.305 "dma_device_id": "system", 00:09:21.305 "dma_device_type": 1 00:09:21.305 }, 00:09:21.305 { 00:09:21.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.305 "dma_device_type": 2 00:09:21.305 }, 00:09:21.305 { 00:09:21.305 "dma_device_id": "system", 00:09:21.305 "dma_device_type": 1 00:09:21.305 }, 00:09:21.305 { 00:09:21.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.305 "dma_device_type": 2 00:09:21.305 }, 00:09:21.305 { 00:09:21.305 "dma_device_id": "system", 00:09:21.305 "dma_device_type": 1 00:09:21.305 }, 00:09:21.305 { 00:09:21.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.305 "dma_device_type": 2 00:09:21.305 } 00:09:21.305 ], 00:09:21.305 "driver_specific": { 00:09:21.305 "raid": { 00:09:21.305 "uuid": "eab131fc-c69e-4a69-b361-a89067e0dabd", 00:09:21.305 "strip_size_kb": 64, 00:09:21.305 "state": "online", 00:09:21.305 "raid_level": "concat", 00:09:21.305 "superblock": true, 00:09:21.305 "num_base_bdevs": 3, 00:09:21.305 "num_base_bdevs_discovered": 3, 00:09:21.306 "num_base_bdevs_operational": 3, 00:09:21.306 "base_bdevs_list": [ 00:09:21.306 { 
00:09:21.306 "name": "NewBaseBdev", 00:09:21.306 "uuid": "3d1e2ef4-64fa-41c8-ba4c-8cf4c687d7c0", 00:09:21.306 "is_configured": true, 00:09:21.306 "data_offset": 2048, 00:09:21.306 "data_size": 63488 00:09:21.306 }, 00:09:21.306 { 00:09:21.306 "name": "BaseBdev2", 00:09:21.306 "uuid": "bc4ba7e3-bc3a-4cde-9351-60fb66eb3e6a", 00:09:21.306 "is_configured": true, 00:09:21.306 "data_offset": 2048, 00:09:21.306 "data_size": 63488 00:09:21.306 }, 00:09:21.306 { 00:09:21.306 "name": "BaseBdev3", 00:09:21.306 "uuid": "6508af2a-cc62-4f42-974f-4debb2e6f96e", 00:09:21.306 "is_configured": true, 00:09:21.306 "data_offset": 2048, 00:09:21.306 "data_size": 63488 00:09:21.306 } 00:09:21.306 ] 00:09:21.306 } 00:09:21.306 } 00:09:21.306 }' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:21.306 BaseBdev2 00:09:21.306 BaseBdev3' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.306 18:49:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.306 [2024-11-28 18:49:50.883571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.306 [2024-11-28 18:49:50.883597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.306 [2024-11-28 18:49:50.883656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.306 [2024-11-28 18:49:50.883710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.306 [2024-11-28 18:49:50.883718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78881 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78881 ']' 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 78881 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:21.306 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.306 18:49:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78881 00:09:21.566 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.566 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.567 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78881' 00:09:21.567 killing process with pid 78881 00:09:21.567 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 78881 00:09:21.567 [2024-11-28 18:49:50.931176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.567 18:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 78881 00:09:21.567 [2024-11-28 18:49:50.961395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.827 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:21.827 00:09:21.827 real 0m8.635s 00:09:21.827 user 0m14.797s 00:09:21.827 sys 0m1.664s 00:09:21.827 18:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.827 18:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.827 ************************************ 00:09:21.827 END TEST raid_state_function_test_sb 00:09:21.827 ************************************ 00:09:21.827 18:49:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:21.827 18:49:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:21.827 18:49:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.827 18:49:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.827 ************************************ 00:09:21.827 START TEST raid_superblock_test 00:09:21.827 
************************************ 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79479 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79479 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79479 ']' 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.827 18:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.827 [2024-11-28 18:49:51.347444] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:21.827 [2024-11-28 18:49:51.347657] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79479 ] 00:09:22.087 [2024-11-28 18:49:51.481804] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:22.087 [2024-11-28 18:49:51.519907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.087 [2024-11-28 18:49:51.544549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.087 [2024-11-28 18:49:51.585801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.087 [2024-11-28 18:49:51.585840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.662 malloc1 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.662 [2024-11-28 18:49:52.182091] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:22.662 [2024-11-28 18:49:52.182206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.662 [2024-11-28 18:49:52.182263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:22.662 [2024-11-28 18:49:52.182302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.662 [2024-11-28 18:49:52.184395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.662 [2024-11-28 18:49:52.184478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:22.662 pt1 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.662 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 malloc2 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 [2024-11-28 18:49:52.214496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:22.663 [2024-11-28 18:49:52.214596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.663 [2024-11-28 18:49:52.214617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:22.663 [2024-11-28 18:49:52.214625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.663 [2024-11-28 18:49:52.216712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.663 [2024-11-28 18:49:52.216747] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:22.663 pt2 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 malloc3 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 [2024-11-28 18:49:52.242825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:22.663 [2024-11-28 18:49:52.242925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.663 [2024-11-28 18:49:52.242961] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:22.663 [2024-11-28 18:49:52.242988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:22.663 [2024-11-28 18:49:52.245013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.663 [2024-11-28 18:49:52.245094] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:22.663 pt3 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 [2024-11-28 18:49:52.254878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:22.663 [2024-11-28 18:49:52.256774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:22.663 [2024-11-28 18:49:52.256886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:22.663 [2024-11-28 18:49:52.257060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:22.663 [2024-11-28 18:49:52.257116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.663 [2024-11-28 18:49:52.257377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:22.663 [2024-11-28 18:49:52.257566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:22.663 [2024-11-28 18:49:52.257608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:22.663 [2024-11-28 
18:49:52.257767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.663 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.923 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.923 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.923 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.923 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.923 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.923 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.923 "name": "raid_bdev1", 00:09:22.923 
"uuid": "90a8c320-2fc2-48c6-b879-3d77195e59a4", 00:09:22.923 "strip_size_kb": 64, 00:09:22.923 "state": "online", 00:09:22.923 "raid_level": "concat", 00:09:22.923 "superblock": true, 00:09:22.923 "num_base_bdevs": 3, 00:09:22.923 "num_base_bdevs_discovered": 3, 00:09:22.923 "num_base_bdevs_operational": 3, 00:09:22.923 "base_bdevs_list": [ 00:09:22.923 { 00:09:22.923 "name": "pt1", 00:09:22.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.923 "is_configured": true, 00:09:22.923 "data_offset": 2048, 00:09:22.923 "data_size": 63488 00:09:22.923 }, 00:09:22.923 { 00:09:22.923 "name": "pt2", 00:09:22.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.923 "is_configured": true, 00:09:22.923 "data_offset": 2048, 00:09:22.923 "data_size": 63488 00:09:22.923 }, 00:09:22.923 { 00:09:22.923 "name": "pt3", 00:09:22.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.923 "is_configured": true, 00:09:22.923 "data_offset": 2048, 00:09:22.923 "data_size": 63488 00:09:22.923 } 00:09:22.923 ] 00:09:22.923 }' 00:09:22.923 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.923 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.183 
18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.183 [2024-11-28 18:49:52.727279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.183 "name": "raid_bdev1", 00:09:23.183 "aliases": [ 00:09:23.183 "90a8c320-2fc2-48c6-b879-3d77195e59a4" 00:09:23.183 ], 00:09:23.183 "product_name": "Raid Volume", 00:09:23.183 "block_size": 512, 00:09:23.183 "num_blocks": 190464, 00:09:23.183 "uuid": "90a8c320-2fc2-48c6-b879-3d77195e59a4", 00:09:23.183 "assigned_rate_limits": { 00:09:23.183 "rw_ios_per_sec": 0, 00:09:23.183 "rw_mbytes_per_sec": 0, 00:09:23.183 "r_mbytes_per_sec": 0, 00:09:23.183 "w_mbytes_per_sec": 0 00:09:23.183 }, 00:09:23.183 "claimed": false, 00:09:23.183 "zoned": false, 00:09:23.183 "supported_io_types": { 00:09:23.183 "read": true, 00:09:23.183 "write": true, 00:09:23.183 "unmap": true, 00:09:23.183 "flush": true, 00:09:23.183 "reset": true, 00:09:23.183 "nvme_admin": false, 00:09:23.183 "nvme_io": false, 00:09:23.183 "nvme_io_md": false, 00:09:23.183 "write_zeroes": true, 00:09:23.183 "zcopy": false, 00:09:23.183 "get_zone_info": false, 00:09:23.183 "zone_management": false, 00:09:23.183 "zone_append": false, 00:09:23.183 "compare": false, 00:09:23.183 "compare_and_write": false, 00:09:23.183 "abort": false, 00:09:23.183 "seek_hole": false, 00:09:23.183 "seek_data": false, 00:09:23.183 "copy": false, 00:09:23.183 "nvme_iov_md": false 00:09:23.183 }, 00:09:23.183 "memory_domains": [ 00:09:23.183 { 00:09:23.183 "dma_device_id": "system", 00:09:23.183 
"dma_device_type": 1 00:09:23.183 }, 00:09:23.183 { 00:09:23.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.183 "dma_device_type": 2 00:09:23.183 }, 00:09:23.183 { 00:09:23.183 "dma_device_id": "system", 00:09:23.183 "dma_device_type": 1 00:09:23.183 }, 00:09:23.183 { 00:09:23.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.183 "dma_device_type": 2 00:09:23.183 }, 00:09:23.183 { 00:09:23.183 "dma_device_id": "system", 00:09:23.183 "dma_device_type": 1 00:09:23.183 }, 00:09:23.183 { 00:09:23.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.183 "dma_device_type": 2 00:09:23.183 } 00:09:23.183 ], 00:09:23.183 "driver_specific": { 00:09:23.183 "raid": { 00:09:23.183 "uuid": "90a8c320-2fc2-48c6-b879-3d77195e59a4", 00:09:23.183 "strip_size_kb": 64, 00:09:23.183 "state": "online", 00:09:23.183 "raid_level": "concat", 00:09:23.183 "superblock": true, 00:09:23.183 "num_base_bdevs": 3, 00:09:23.183 "num_base_bdevs_discovered": 3, 00:09:23.183 "num_base_bdevs_operational": 3, 00:09:23.183 "base_bdevs_list": [ 00:09:23.183 { 00:09:23.183 "name": "pt1", 00:09:23.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.183 "is_configured": true, 00:09:23.183 "data_offset": 2048, 00:09:23.183 "data_size": 63488 00:09:23.183 }, 00:09:23.183 { 00:09:23.183 "name": "pt2", 00:09:23.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.183 "is_configured": true, 00:09:23.183 "data_offset": 2048, 00:09:23.183 "data_size": 63488 00:09:23.183 }, 00:09:23.183 { 00:09:23.183 "name": "pt3", 00:09:23.183 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.183 "is_configured": true, 00:09:23.183 "data_offset": 2048, 00:09:23.183 "data_size": 63488 00:09:23.183 } 00:09:23.183 ] 00:09:23.183 } 00:09:23.183 } 00:09:23.183 }' 00:09:23.183 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:23.444 pt2 00:09:23.444 pt3' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:23.444 [2024-11-28 18:49:52.971286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=90a8c320-2fc2-48c6-b879-3d77195e59a4 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 90a8c320-2fc2-48c6-b879-3d77195e59a4 ']' 00:09:23.444 18:49:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.444 18:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.444 [2024-11-28 18:49:52.999013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.444 [2024-11-28 18:49:52.999037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.444 [2024-11-28 18:49:52.999112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.444 [2024-11-28 18:49:52.999183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.444 [2024-11-28 18:49:52.999192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:23.444 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.444 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.444 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.444 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.444 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:23.444 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.704 18:49:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.704 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.704 [2024-11-28 18:49:53.143094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:23.704 [2024-11-28 18:49:53.144979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:23.704 [2024-11-28 18:49:53.145065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:23.704 [2024-11-28 18:49:53.145140] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:23.704 [2024-11-28 18:49:53.145219] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:23.704 [2024-11-28 18:49:53.145270] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:23.704 [2024-11-28 18:49:53.145327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.704 [2024-11-28 18:49:53.145356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:09:23.704 request: 00:09:23.704 { 00:09:23.704 "name": "raid_bdev1", 00:09:23.705 "raid_level": "concat", 00:09:23.705 "base_bdevs": [ 00:09:23.705 "malloc1", 00:09:23.705 "malloc2", 00:09:23.705 "malloc3" 00:09:23.705 ], 00:09:23.705 "strip_size_kb": 64, 00:09:23.705 "superblock": false, 00:09:23.705 "method": "bdev_raid_create", 00:09:23.705 "req_id": 1 00:09:23.705 } 00:09:23.705 Got JSON-RPC error response 00:09:23.705 response: 00:09:23.705 { 00:09:23.705 "code": -17, 00:09:23.705 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:23.705 } 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.705 18:49:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.705 [2024-11-28 18:49:53.207068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:23.705 [2024-11-28 18:49:53.207167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.705 [2024-11-28 18:49:53.207204] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:23.705 [2024-11-28 18:49:53.207232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.705 [2024-11-28 18:49:53.209348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.705 [2024-11-28 18:49:53.209416] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:23.705 [2024-11-28 18:49:53.209515] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:23.705 [2024-11-28 18:49:53.209580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:23.705 pt1 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:23.705 18:49:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.705 "name": "raid_bdev1", 00:09:23.705 "uuid": "90a8c320-2fc2-48c6-b879-3d77195e59a4", 00:09:23.705 "strip_size_kb": 64, 00:09:23.705 "state": "configuring", 00:09:23.705 "raid_level": "concat", 00:09:23.705 "superblock": true, 00:09:23.705 "num_base_bdevs": 3, 00:09:23.705 "num_base_bdevs_discovered": 1, 00:09:23.705 "num_base_bdevs_operational": 3, 00:09:23.705 "base_bdevs_list": [ 
00:09:23.705 { 00:09:23.705 "name": "pt1", 00:09:23.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.705 "is_configured": true, 00:09:23.705 "data_offset": 2048, 00:09:23.705 "data_size": 63488 00:09:23.705 }, 00:09:23.705 { 00:09:23.705 "name": null, 00:09:23.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.705 "is_configured": false, 00:09:23.705 "data_offset": 2048, 00:09:23.705 "data_size": 63488 00:09:23.705 }, 00:09:23.705 { 00:09:23.705 "name": null, 00:09:23.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.705 "is_configured": false, 00:09:23.705 "data_offset": 2048, 00:09:23.705 "data_size": 63488 00:09:23.705 } 00:09:23.705 ] 00:09:23.705 }' 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.705 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.964 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:23.964 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.964 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.964 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.244 [2024-11-28 18:49:53.571206] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.245 [2024-11-28 18:49:53.571303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.245 [2024-11-28 18:49:53.571331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:24.245 [2024-11-28 18:49:53.571340] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.245 [2024-11-28 18:49:53.571731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.245 [2024-11-28 
18:49:53.571751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.245 [2024-11-28 18:49:53.571815] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:24.245 [2024-11-28 18:49:53.571841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.245 pt2 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.245 [2024-11-28 18:49:53.583256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.245 "name": "raid_bdev1", 00:09:24.245 "uuid": "90a8c320-2fc2-48c6-b879-3d77195e59a4", 00:09:24.245 "strip_size_kb": 64, 00:09:24.245 "state": "configuring", 00:09:24.245 "raid_level": "concat", 00:09:24.245 "superblock": true, 00:09:24.245 "num_base_bdevs": 3, 00:09:24.245 "num_base_bdevs_discovered": 1, 00:09:24.245 "num_base_bdevs_operational": 3, 00:09:24.245 "base_bdevs_list": [ 00:09:24.245 { 00:09:24.245 "name": "pt1", 00:09:24.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.245 "is_configured": true, 00:09:24.245 "data_offset": 2048, 00:09:24.245 "data_size": 63488 00:09:24.245 }, 00:09:24.245 { 00:09:24.245 "name": null, 00:09:24.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.245 "is_configured": false, 00:09:24.245 "data_offset": 0, 00:09:24.245 "data_size": 63488 00:09:24.245 }, 00:09:24.245 { 00:09:24.245 "name": null, 00:09:24.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.245 "is_configured": false, 00:09:24.245 "data_offset": 2048, 00:09:24.245 "data_size": 63488 00:09:24.245 } 00:09:24.245 ] 00:09:24.245 }' 00:09:24.245 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.245 18:49:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.505 [2024-11-28 18:49:53.975336] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.505 [2024-11-28 18:49:53.975447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.505 [2024-11-28 18:49:53.975481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:24.505 [2024-11-28 18:49:53.975510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.505 [2024-11-28 18:49:53.975909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.505 [2024-11-28 18:49:53.975966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.505 [2024-11-28 18:49:53.976052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:24.505 [2024-11-28 18:49:53.976101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.505 pt2 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.505 [2024-11-28 18:49:53.983312] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:24.505 [2024-11-28 18:49:53.983397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.505 [2024-11-28 18:49:53.983444] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:24.505 [2024-11-28 18:49:53.983473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.505 [2024-11-28 18:49:53.983792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.505 [2024-11-28 18:49:53.983848] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:24.505 [2024-11-28 18:49:53.983921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:24.505 [2024-11-28 18:49:53.983971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:24.505 [2024-11-28 18:49:53.984079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:24.505 [2024-11-28 18:49:53.984117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:24.505 [2024-11-28 18:49:53.984369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:24.505 [2024-11-28 18:49:53.984523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:24.505 [2024-11-28 18:49:53.984561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:24.505 [2024-11-28 18:49:53.984693] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.505 pt3 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.505 18:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.505 18:49:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.505 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.505 "name": "raid_bdev1", 00:09:24.505 "uuid": "90a8c320-2fc2-48c6-b879-3d77195e59a4", 00:09:24.505 "strip_size_kb": 64, 00:09:24.505 "state": "online", 00:09:24.505 "raid_level": "concat", 00:09:24.505 "superblock": true, 00:09:24.505 "num_base_bdevs": 3, 00:09:24.505 "num_base_bdevs_discovered": 3, 00:09:24.505 "num_base_bdevs_operational": 3, 00:09:24.505 "base_bdevs_list": [ 00:09:24.505 { 00:09:24.505 "name": "pt1", 00:09:24.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.505 "is_configured": true, 00:09:24.505 "data_offset": 2048, 00:09:24.505 "data_size": 63488 00:09:24.505 }, 00:09:24.505 { 00:09:24.505 "name": "pt2", 00:09:24.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.505 "is_configured": true, 00:09:24.505 "data_offset": 2048, 00:09:24.505 "data_size": 63488 00:09:24.505 }, 00:09:24.505 { 00:09:24.505 "name": "pt3", 00:09:24.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.505 "is_configured": true, 00:09:24.505 "data_offset": 2048, 00:09:24.505 "data_size": 63488 00:09:24.505 } 00:09:24.505 ] 00:09:24.505 }' 00:09:24.505 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.505 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.074 18:49:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.074 [2024-11-28 18:49:54.411710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.074 "name": "raid_bdev1", 00:09:25.074 "aliases": [ 00:09:25.074 "90a8c320-2fc2-48c6-b879-3d77195e59a4" 00:09:25.074 ], 00:09:25.074 "product_name": "Raid Volume", 00:09:25.074 "block_size": 512, 00:09:25.074 "num_blocks": 190464, 00:09:25.074 "uuid": "90a8c320-2fc2-48c6-b879-3d77195e59a4", 00:09:25.074 "assigned_rate_limits": { 00:09:25.074 "rw_ios_per_sec": 0, 00:09:25.074 "rw_mbytes_per_sec": 0, 00:09:25.074 "r_mbytes_per_sec": 0, 00:09:25.074 "w_mbytes_per_sec": 0 00:09:25.074 }, 00:09:25.074 "claimed": false, 00:09:25.074 "zoned": false, 00:09:25.074 "supported_io_types": { 00:09:25.074 "read": true, 00:09:25.074 "write": true, 00:09:25.074 "unmap": true, 00:09:25.074 "flush": true, 00:09:25.074 "reset": true, 00:09:25.074 "nvme_admin": false, 00:09:25.074 "nvme_io": false, 00:09:25.074 "nvme_io_md": false, 00:09:25.074 "write_zeroes": true, 00:09:25.074 "zcopy": false, 00:09:25.074 "get_zone_info": false, 00:09:25.074 "zone_management": false, 00:09:25.074 "zone_append": false, 00:09:25.074 "compare": false, 00:09:25.074 "compare_and_write": false, 00:09:25.074 "abort": false, 00:09:25.074 "seek_hole": false, 00:09:25.074 
"seek_data": false, 00:09:25.074 "copy": false, 00:09:25.074 "nvme_iov_md": false 00:09:25.074 }, 00:09:25.074 "memory_domains": [ 00:09:25.074 { 00:09:25.074 "dma_device_id": "system", 00:09:25.074 "dma_device_type": 1 00:09:25.074 }, 00:09:25.074 { 00:09:25.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.074 "dma_device_type": 2 00:09:25.074 }, 00:09:25.074 { 00:09:25.074 "dma_device_id": "system", 00:09:25.074 "dma_device_type": 1 00:09:25.074 }, 00:09:25.074 { 00:09:25.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.074 "dma_device_type": 2 00:09:25.074 }, 00:09:25.074 { 00:09:25.074 "dma_device_id": "system", 00:09:25.074 "dma_device_type": 1 00:09:25.074 }, 00:09:25.074 { 00:09:25.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.074 "dma_device_type": 2 00:09:25.074 } 00:09:25.074 ], 00:09:25.074 "driver_specific": { 00:09:25.074 "raid": { 00:09:25.074 "uuid": "90a8c320-2fc2-48c6-b879-3d77195e59a4", 00:09:25.074 "strip_size_kb": 64, 00:09:25.074 "state": "online", 00:09:25.074 "raid_level": "concat", 00:09:25.074 "superblock": true, 00:09:25.074 "num_base_bdevs": 3, 00:09:25.074 "num_base_bdevs_discovered": 3, 00:09:25.074 "num_base_bdevs_operational": 3, 00:09:25.074 "base_bdevs_list": [ 00:09:25.074 { 00:09:25.074 "name": "pt1", 00:09:25.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.074 "is_configured": true, 00:09:25.074 "data_offset": 2048, 00:09:25.074 "data_size": 63488 00:09:25.074 }, 00:09:25.074 { 00:09:25.074 "name": "pt2", 00:09:25.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.074 "is_configured": true, 00:09:25.074 "data_offset": 2048, 00:09:25.074 "data_size": 63488 00:09:25.074 }, 00:09:25.074 { 00:09:25.074 "name": "pt3", 00:09:25.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.074 "is_configured": true, 00:09:25.074 "data_offset": 2048, 00:09:25.074 "data_size": 63488 00:09:25.074 } 00:09:25.074 ] 00:09:25.074 } 00:09:25.074 } 00:09:25.074 }' 00:09:25.074 18:49:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:25.074 pt2 00:09:25.074 pt3' 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x
00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.074 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.334 [2024-11-28 18:49:54.699790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 90a8c320-2fc2-48c6-b879-3d77195e59a4 '!=' 90a8c320-2fc2-48c6-b879-3d77195e59a4 ']'
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79479
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79479 ']'
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79479
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79479
00:09:25.334 killing process with pid 79479 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79479'
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 79479
00:09:25.334 [2024-11-28 18:49:54.764757] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:25.334 [2024-11-28 18:49:54.764837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:25.334 [2024-11-28 18:49:54.764894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:25.334 [2024-11-28 18:49:54.764905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:09:25.334 18:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79479
00:09:25.334 [2024-11-28 18:49:54.797431] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:25.602 18:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:09:25.602
00:09:25.602 real 0m3.756s
00:09:25.602 user 0m5.885s
00:09:25.602 sys 0m0.786s
00:09:25.602 ************************************
00:09:25.602 END TEST raid_superblock_test
00:09:25.602 ************************************
00:09:25.602 18:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:25.602 18:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.602 18:49:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read
00:09:25.602 18:49:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:25.602 18:49:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:25.602 18:49:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:25.602 ************************************
00:09:25.602 START TEST raid_read_error_test
00:09:25.602 ************************************
00:09:25.602 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read
00:09:25.602 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:09:25.602 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:25.602 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:09:25.602 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:25.602 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ouWSKkXA4w
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79721
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79721
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 79721 ']'
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:25.603 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:25.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:25.604 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:25.604 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.604 [2024-11-28 18:49:55.179603] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:09:25.604 [2024-11-28 18:49:55.179809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79721 ]
00:09:25.863 [2024-11-28 18:49:55.313048] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:25.863 [2024-11-28 18:49:55.332766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:25.863 [2024-11-28 18:49:55.357149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:25.863 [2024-11-28 18:49:55.398844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:25.863 [2024-11-28 18:49:55.398956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:26.433 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:26.433 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:09:26.433 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:26.433 18:49:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:26.433 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.433 18:49:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.433 BaseBdev1_malloc
00:09:26.433 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.433 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:26.433 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.433 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.433 true
00:09:26.433 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.433 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:26.433 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.433 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.433 [2024-11-28 18:49:56.034845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:26.433 [2024-11-28 18:49:56.034907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:26.433 [2024-11-28 18:49:56.034923] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:26.433 [2024-11-28 18:49:56.034935] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:26.769 [2024-11-28 18:49:56.037018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:26.769 [2024-11-28 18:49:56.037119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:26.769 BaseBdev1
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.769 BaseBdev2_malloc
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.769 true
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.769 [2024-11-28 18:49:56.075349] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:26.769 [2024-11-28 18:49:56.075444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:26.769 [2024-11-28 18:49:56.075464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:26.769 [2024-11-28 18:49:56.075474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:26.769 [2024-11-28 18:49:56.077510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:26.769 [2024-11-28 18:49:56.077546] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:26.769 BaseBdev2
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.769 BaseBdev3_malloc
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.769 true
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.769 [2024-11-28 18:49:56.115751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:26.769 [2024-11-28 18:49:56.115799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:26.769 [2024-11-28 18:49:56.115815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:26.769 [2024-11-28 18:49:56.115824] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:26.769 [2024-11-28 18:49:56.117814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:26.769 [2024-11-28 18:49:56.117851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:26.769 BaseBdev3
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.769 [2024-11-28 18:49:56.127819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:26.769 [2024-11-28 18:49:56.129594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:26.769 [2024-11-28 18:49:56.129662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:26.769 [2024-11-28 18:49:56.129826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:26.769 [2024-11-28 18:49:56.129838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:26.769 [2024-11-28 18:49:56.130078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970
00:09:26.769 [2024-11-28 18:49:56.130220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:26.769 [2024-11-28 18:49:56.130232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:26.769 [2024-11-28 18:49:56.130353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.769 "name": "raid_bdev1",
00:09:26.769 "uuid": "30031887-720f-4a70-b15a-dd6f65171d75",
00:09:26.769 "strip_size_kb": 64,
00:09:26.769 "state": "online",
00:09:26.769 "raid_level": "concat",
00:09:26.769 "superblock": true,
00:09:26.769 "num_base_bdevs": 3,
00:09:26.769 "num_base_bdevs_discovered": 3,
00:09:26.769 "num_base_bdevs_operational": 3,
00:09:26.769 "base_bdevs_list": [
00:09:26.769 {
00:09:26.769 "name": "BaseBdev1",
00:09:26.769 "uuid": "c1e09bfc-bf18-59aa-8aaf-eb4540973b63",
00:09:26.769 "is_configured": true,
00:09:26.769 "data_offset": 2048,
00:09:26.769 "data_size": 63488
00:09:26.769 },
00:09:26.769 {
00:09:26.769 "name": "BaseBdev2",
00:09:26.769 "uuid": "8c39a562-16b5-5f18-b0f0-db845bd2791e",
00:09:26.769 "is_configured": true,
00:09:26.769 "data_offset": 2048,
00:09:26.769 "data_size": 63488
00:09:26.769 },
00:09:26.769 {
00:09:26.769 "name": "BaseBdev3",
00:09:26.769 "uuid": "80f09600-849a-5c80-9d9f-a9437d477b65",
00:09:26.769 "is_configured": true,
00:09:26.769 "data_offset": 2048,
00:09:26.769 "data_size": 63488
00:09:26.769 }
00:09:26.769 ]
00:09:26.769 }'
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.769 18:49:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.036 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:27.036 18:49:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:27.296 [2024-11-28 18:49:56.664318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.233 "name": "raid_bdev1",
00:09:28.233 "uuid": "30031887-720f-4a70-b15a-dd6f65171d75",
00:09:28.233 "strip_size_kb": 64,
00:09:28.233 "state": "online",
00:09:28.233 "raid_level": "concat",
00:09:28.233 "superblock": true,
00:09:28.233 "num_base_bdevs": 3,
00:09:28.233 "num_base_bdevs_discovered": 3,
00:09:28.233 "num_base_bdevs_operational": 3,
00:09:28.233 "base_bdevs_list": [
00:09:28.233 {
00:09:28.233 "name": "BaseBdev1",
00:09:28.233 "uuid": "c1e09bfc-bf18-59aa-8aaf-eb4540973b63",
00:09:28.233 "is_configured": true,
00:09:28.233 "data_offset": 2048,
00:09:28.233 "data_size": 63488
00:09:28.233 },
00:09:28.233 {
00:09:28.233 "name": "BaseBdev2",
00:09:28.233 "uuid": "8c39a562-16b5-5f18-b0f0-db845bd2791e",
00:09:28.233 "is_configured": true,
00:09:28.233 "data_offset": 2048,
00:09:28.233 "data_size": 63488
00:09:28.233 },
00:09:28.233 {
00:09:28.233 "name": "BaseBdev3",
00:09:28.233 "uuid": "80f09600-849a-5c80-9d9f-a9437d477b65",
00:09:28.233 "is_configured": true,
00:09:28.233 "data_offset": 2048,
00:09:28.233 "data_size": 63488
00:09:28.233 }
00:09:28.233 ]
00:09:28.233 }'
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.233 18:49:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.493 [2024-11-28 18:49:58.074812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:28.493 [2024-11-28 18:49:58.074915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:28.493 [2024-11-28 18:49:58.077462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:28.493 [2024-11-28 18:49:58.077508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:28.493 [2024-11-28 18:49:58.077544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:28.493 [2024-11-28 18:49:58.077553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:09:28.493 {
00:09:28.493 "results": [
00:09:28.493 {
00:09:28.493 "job": "raid_bdev1",
00:09:28.493 "core_mask": "0x1",
00:09:28.493 "workload": "randrw",
00:09:28.493 "percentage": 50,
00:09:28.493 "status": "finished",
00:09:28.493 "queue_depth": 1,
00:09:28.493 "io_size": 131072,
00:09:28.493 "runtime": 1.408697,
00:09:28.493 "iops": 16956.804763550997,
00:09:28.493 "mibps": 2119.6005954438747,
00:09:28.493 "io_failed": 1,
00:09:28.493 "io_timeout": 0,
00:09:28.493 "avg_latency_us": 81.45542442202022,
00:09:28.493 "min_latency_us": 24.76771550597054,
00:09:28.493 "max_latency_us": 1399.4874923733985
00:09:28.493 }
00:09:28.493 ],
00:09:28.493 "core_count": 1
00:09:28.493 }
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79721
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 79721 ']'
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 79721
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:28.493 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79721
00:09:28.753 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:28.753 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:28.753 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79721' killing process with pid 79721
00:09:28.753 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 79721
00:09:28.753 [2024-11-28 18:49:58.114899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:28.753 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 79721
00:09:28.753 [2024-11-28 18:49:58.140374] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:28.753 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:28.753 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ouWSKkXA4w
00:09:28.753 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:29.012 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:09:29.012 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:09:29.012 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:29.012 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:29.012 ************************************
00:09:29.012 END TEST raid_read_error_test
00:09:29.012 ************************************
00:09:29.012 18:49:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:09:29.012
00:09:29.012 real 0m3.279s
00:09:29.012 user 0m4.177s
00:09:29.012 sys 0m0.486s
00:09:29.012 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:29.012 18:49:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.012 18:49:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write
00:09:29.012 18:49:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:29.012 18:49:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:29.012 18:49:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:29.012 ************************************
00:09:29.012 START TEST raid_write_error_test
00:09:29.012 ************************************
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dLI45zDNES
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79850
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79850
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 79850 ']'
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:29.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 18:49:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:29.012 18:49:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.012 [2024-11-28 18:49:58.532261] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:09:29.013 [2024-11-28 18:49:58.532383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79850 ]
00:09:29.272 [2024-11-28 18:49:58.666572] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:29.272 [2024-11-28 18:49:58.703201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:29.272 [2024-11-28 18:49:58.728008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.272 [2024-11-28 18:49:58.769749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:29.272 [2024-11-28 18:49:58.769873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.841 BaseBdev1_malloc
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.841 true
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.841 [2024-11-28 18:49:59.362128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:29.841 [2024-11-28 18:49:59.362182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:29.841 [2024-11-28 18:49:59.362197] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:29.841 [2024-11-28 18:49:59.362209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:29.841 [2024-11-28 18:49:59.364283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:29.841 [2024-11-28 18:49:59.364324] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:29.841 BaseBdev1
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.841 BaseBdev2_malloc
00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591
-- # [[ 0 == 0 ]] 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.841 true 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.841 [2024-11-28 18:49:59.402517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:29.841 [2024-11-28 18:49:59.402562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.841 [2024-11-28 18:49:59.402576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:29.841 [2024-11-28 18:49:59.402586] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.841 [2024-11-28 18:49:59.404606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.841 [2024-11-28 18:49:59.404695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:29.841 BaseBdev2 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:29.841 18:49:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.841 BaseBdev3_malloc 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.841 true 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:29.841 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.842 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.842 [2024-11-28 18:49:59.443121] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:29.842 [2024-11-28 18:49:59.443176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.842 [2024-11-28 18:49:59.443193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:29.842 [2024-11-28 18:49:59.443205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.102 [2024-11-28 18:49:59.445236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.102 [2024-11-28 18:49:59.445276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:30.102 BaseBdev3 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.102 [2024-11-28 18:49:59.455227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.102 [2024-11-28 18:49:59.457022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.102 [2024-11-28 18:49:59.457085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.102 [2024-11-28 18:49:59.457241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:30.102 [2024-11-28 18:49:59.457257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.102 [2024-11-28 18:49:59.457514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:30.102 [2024-11-28 18:49:59.457669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:30.102 [2024-11-28 18:49:59.457687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:30.102 [2024-11-28 18:49:59.457794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.102 "name": "raid_bdev1", 00:09:30.102 "uuid": "80934d4e-9ee5-4a02-98cf-e0f159960c7f", 00:09:30.102 "strip_size_kb": 64, 00:09:30.102 "state": "online", 00:09:30.102 "raid_level": "concat", 00:09:30.102 "superblock": true, 00:09:30.102 "num_base_bdevs": 3, 00:09:30.102 "num_base_bdevs_discovered": 3, 00:09:30.102 "num_base_bdevs_operational": 3, 00:09:30.102 "base_bdevs_list": [ 00:09:30.102 { 00:09:30.102 "name": "BaseBdev1", 00:09:30.102 "uuid": "60f3a396-fa27-51ee-a5ba-6af60daa82b4", 00:09:30.102 "is_configured": true, 00:09:30.102 "data_offset": 2048, 
00:09:30.102 "data_size": 63488 00:09:30.102 }, 00:09:30.102 { 00:09:30.102 "name": "BaseBdev2", 00:09:30.102 "uuid": "84d797a5-c3e5-5b19-9790-eea06d0b3e8f", 00:09:30.102 "is_configured": true, 00:09:30.102 "data_offset": 2048, 00:09:30.102 "data_size": 63488 00:09:30.102 }, 00:09:30.102 { 00:09:30.102 "name": "BaseBdev3", 00:09:30.102 "uuid": "6be2a23c-ab52-582f-b092-d55ad03c1597", 00:09:30.102 "is_configured": true, 00:09:30.102 "data_offset": 2048, 00:09:30.102 "data_size": 63488 00:09:30.102 } 00:09:30.102 ] 00:09:30.102 }' 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.102 18:49:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.361 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:30.361 18:49:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.361 [2024-11-28 18:49:59.955744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.300 18:50:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.559 18:50:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.560 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.560 "name": "raid_bdev1", 00:09:31.560 "uuid": "80934d4e-9ee5-4a02-98cf-e0f159960c7f", 00:09:31.560 "strip_size_kb": 64, 00:09:31.560 "state": "online", 00:09:31.560 "raid_level": "concat", 00:09:31.560 "superblock": true, 00:09:31.560 "num_base_bdevs": 3, 00:09:31.560 "num_base_bdevs_discovered": 3, 
00:09:31.560 "num_base_bdevs_operational": 3, 00:09:31.560 "base_bdevs_list": [ 00:09:31.560 { 00:09:31.560 "name": "BaseBdev1", 00:09:31.560 "uuid": "60f3a396-fa27-51ee-a5ba-6af60daa82b4", 00:09:31.560 "is_configured": true, 00:09:31.560 "data_offset": 2048, 00:09:31.560 "data_size": 63488 00:09:31.560 }, 00:09:31.560 { 00:09:31.560 "name": "BaseBdev2", 00:09:31.560 "uuid": "84d797a5-c3e5-5b19-9790-eea06d0b3e8f", 00:09:31.560 "is_configured": true, 00:09:31.560 "data_offset": 2048, 00:09:31.560 "data_size": 63488 00:09:31.560 }, 00:09:31.560 { 00:09:31.560 "name": "BaseBdev3", 00:09:31.560 "uuid": "6be2a23c-ab52-582f-b092-d55ad03c1597", 00:09:31.560 "is_configured": true, 00:09:31.560 "data_offset": 2048, 00:09:31.560 "data_size": 63488 00:09:31.560 } 00:09:31.560 ] 00:09:31.560 }' 00:09:31.560 18:50:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.560 18:50:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.821 [2024-11-28 18:50:01.338201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.821 [2024-11-28 18:50:01.338301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.821 [2024-11-28 18:50:01.340800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.821 [2024-11-28 18:50:01.340845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.821 [2024-11-28 18:50:01.340881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.821 [2024-11-28 18:50:01.340900] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:31.821 { 00:09:31.821 "results": [ 00:09:31.821 { 00:09:31.821 "job": "raid_bdev1", 00:09:31.821 "core_mask": "0x1", 00:09:31.821 "workload": "randrw", 00:09:31.821 "percentage": 50, 00:09:31.821 "status": "finished", 00:09:31.821 "queue_depth": 1, 00:09:31.821 "io_size": 131072, 00:09:31.821 "runtime": 1.38065, 00:09:31.821 "iops": 17005.033860862637, 00:09:31.821 "mibps": 2125.6292326078296, 00:09:31.821 "io_failed": 1, 00:09:31.821 "io_timeout": 0, 00:09:31.821 "avg_latency_us": 81.22205241447311, 00:09:31.821 "min_latency_us": 24.656149219907608, 00:09:31.821 "max_latency_us": 1356.646038525233 00:09:31.821 } 00:09:31.821 ], 00:09:31.821 "core_count": 1 00:09:31.821 } 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79850 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 79850 ']' 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 79850 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79850 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79850' 00:09:31.821 killing process with pid 79850 00:09:31.821 18:50:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 79850 00:09:31.821 [2024-11-28 18:50:01.375617] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.821 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 79850 00:09:31.821 [2024-11-28 18:50:01.401580] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.081 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dLI45zDNES 00:09:32.081 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:32.082 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:32.082 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:32.082 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:32.082 ************************************ 00:09:32.082 END TEST raid_write_error_test 00:09:32.082 ************************************ 00:09:32.082 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.082 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.082 18:50:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:32.082 00:09:32.082 real 0m3.183s 00:09:32.082 user 0m4.036s 00:09:32.082 sys 0m0.497s 00:09:32.082 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.082 18:50:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.082 18:50:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:32.082 18:50:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:09:32.082 18:50:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:32.082 18:50:01 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.082 18:50:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.341 ************************************ 00:09:32.341 START TEST raid_state_function_test 00:09:32.341 ************************************ 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:32.341 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79977 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79977' 00:09:32.342 Process raid pid: 79977 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79977 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79977 ']' 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 
-- # local max_retries=100 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.342 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.342 [2024-11-28 18:50:01.782491] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:32.342 [2024-11-28 18:50:01.782684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.342 [2024-11-28 18:50:01.917773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:32.601 [2024-11-28 18:50:01.954149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.601 [2024-11-28 18:50:01.979085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.601 [2024-11-28 18:50:02.021058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.601 [2024-11-28 18:50:02.021171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.170 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.170 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:33.170 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.170 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.170 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.171 [2024-11-28 18:50:02.600953] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.171 [2024-11-28 18:50:02.601059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.171 [2024-11-28 18:50:02.601093] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.171 [2024-11-28 18:50:02.601115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.171 [2024-11-28 18:50:02.601185] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.171 [2024-11-28 18:50:02.601206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.171 "name": "Existed_Raid", 00:09:33.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.171 "strip_size_kb": 0, 00:09:33.171 "state": "configuring", 00:09:33.171 "raid_level": "raid1", 00:09:33.171 
"superblock": false, 00:09:33.171 "num_base_bdevs": 3, 00:09:33.171 "num_base_bdevs_discovered": 0, 00:09:33.171 "num_base_bdevs_operational": 3, 00:09:33.171 "base_bdevs_list": [ 00:09:33.171 { 00:09:33.171 "name": "BaseBdev1", 00:09:33.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.171 "is_configured": false, 00:09:33.171 "data_offset": 0, 00:09:33.171 "data_size": 0 00:09:33.171 }, 00:09:33.171 { 00:09:33.171 "name": "BaseBdev2", 00:09:33.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.171 "is_configured": false, 00:09:33.171 "data_offset": 0, 00:09:33.171 "data_size": 0 00:09:33.171 }, 00:09:33.171 { 00:09:33.171 "name": "BaseBdev3", 00:09:33.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.171 "is_configured": false, 00:09:33.171 "data_offset": 0, 00:09:33.171 "data_size": 0 00:09:33.171 } 00:09:33.171 ] 00:09:33.171 }' 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.171 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.431 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.431 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.431 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.691 [2024-11-28 18:50:03.036959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.691 [2024-11-28 18:50:03.037034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.691 18:50:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.691 [2024-11-28 18:50:03.048987] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.691 [2024-11-28 18:50:03.049078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.691 [2024-11-28 18:50:03.049108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.691 [2024-11-28 18:50:03.049128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.691 [2024-11-28 18:50:03.049147] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.691 [2024-11-28 18:50:03.049167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.691 [2024-11-28 18:50:03.070227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.691 BaseBdev1 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.691 [ 00:09:33.691 { 00:09:33.691 "name": "BaseBdev1", 00:09:33.691 "aliases": [ 00:09:33.691 "fda2ecd2-d082-4b2f-8c21-1566df48020b" 00:09:33.691 ], 00:09:33.691 "product_name": "Malloc disk", 00:09:33.691 "block_size": 512, 00:09:33.691 "num_blocks": 65536, 00:09:33.691 "uuid": "fda2ecd2-d082-4b2f-8c21-1566df48020b", 00:09:33.691 "assigned_rate_limits": { 00:09:33.691 "rw_ios_per_sec": 0, 00:09:33.691 "rw_mbytes_per_sec": 0, 00:09:33.691 "r_mbytes_per_sec": 0, 00:09:33.691 "w_mbytes_per_sec": 0 00:09:33.691 }, 00:09:33.691 "claimed": true, 00:09:33.691 "claim_type": "exclusive_write", 00:09:33.691 "zoned": false, 00:09:33.691 "supported_io_types": { 00:09:33.691 "read": true, 00:09:33.691 "write": true, 00:09:33.691 "unmap": true, 00:09:33.691 "flush": true, 00:09:33.691 "reset": true, 00:09:33.691 
"nvme_admin": false, 00:09:33.691 "nvme_io": false, 00:09:33.691 "nvme_io_md": false, 00:09:33.691 "write_zeroes": true, 00:09:33.691 "zcopy": true, 00:09:33.691 "get_zone_info": false, 00:09:33.691 "zone_management": false, 00:09:33.691 "zone_append": false, 00:09:33.691 "compare": false, 00:09:33.691 "compare_and_write": false, 00:09:33.691 "abort": true, 00:09:33.691 "seek_hole": false, 00:09:33.691 "seek_data": false, 00:09:33.691 "copy": true, 00:09:33.691 "nvme_iov_md": false 00:09:33.691 }, 00:09:33.691 "memory_domains": [ 00:09:33.691 { 00:09:33.691 "dma_device_id": "system", 00:09:33.691 "dma_device_type": 1 00:09:33.691 }, 00:09:33.691 { 00:09:33.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.691 "dma_device_type": 2 00:09:33.691 } 00:09:33.691 ], 00:09:33.691 "driver_specific": {} 00:09:33.691 } 00:09:33.691 ] 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.691 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.691 "name": "Existed_Raid", 00:09:33.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.691 "strip_size_kb": 0, 00:09:33.691 "state": "configuring", 00:09:33.692 "raid_level": "raid1", 00:09:33.692 "superblock": false, 00:09:33.692 "num_base_bdevs": 3, 00:09:33.692 "num_base_bdevs_discovered": 1, 00:09:33.692 "num_base_bdevs_operational": 3, 00:09:33.692 "base_bdevs_list": [ 00:09:33.692 { 00:09:33.692 "name": "BaseBdev1", 00:09:33.692 "uuid": "fda2ecd2-d082-4b2f-8c21-1566df48020b", 00:09:33.692 "is_configured": true, 00:09:33.692 "data_offset": 0, 00:09:33.692 "data_size": 65536 00:09:33.692 }, 00:09:33.692 { 00:09:33.692 "name": "BaseBdev2", 00:09:33.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.692 "is_configured": false, 00:09:33.692 "data_offset": 0, 00:09:33.692 "data_size": 0 00:09:33.692 }, 00:09:33.692 { 00:09:33.692 "name": "BaseBdev3", 00:09:33.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.692 "is_configured": false, 00:09:33.692 "data_offset": 0, 00:09:33.692 "data_size": 0 00:09:33.692 } 00:09:33.692 ] 00:09:33.692 }' 
00:09:33.692 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.692 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.262 [2024-11-28 18:50:03.578386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.262 [2024-11-28 18:50:03.578495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.262 [2024-11-28 18:50:03.590436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.262 [2024-11-28 18:50:03.592248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.262 [2024-11-28 18:50:03.592321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.262 [2024-11-28 18:50:03.592362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.262 [2024-11-28 18:50:03.592385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.262 18:50:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.262 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.263 "name": "Existed_Raid", 00:09:34.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.263 "strip_size_kb": 0, 00:09:34.263 "state": "configuring", 00:09:34.263 "raid_level": "raid1", 00:09:34.263 "superblock": false, 00:09:34.263 "num_base_bdevs": 3, 00:09:34.263 "num_base_bdevs_discovered": 1, 00:09:34.263 "num_base_bdevs_operational": 3, 00:09:34.263 "base_bdevs_list": [ 00:09:34.263 { 00:09:34.263 "name": "BaseBdev1", 00:09:34.263 "uuid": "fda2ecd2-d082-4b2f-8c21-1566df48020b", 00:09:34.263 "is_configured": true, 00:09:34.263 "data_offset": 0, 00:09:34.263 "data_size": 65536 00:09:34.263 }, 00:09:34.263 { 00:09:34.263 "name": "BaseBdev2", 00:09:34.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.263 "is_configured": false, 00:09:34.263 "data_offset": 0, 00:09:34.263 "data_size": 0 00:09:34.263 }, 00:09:34.263 { 00:09:34.263 "name": "BaseBdev3", 00:09:34.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.263 "is_configured": false, 00:09:34.263 "data_offset": 0, 00:09:34.263 "data_size": 0 00:09:34.263 } 00:09:34.263 ] 00:09:34.263 }' 00:09:34.263 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.263 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.523 BaseBdev2 00:09:34.523 [2024-11-28 18:50:03.989453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.523 18:50:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.523 18:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.523 [ 00:09:34.523 { 00:09:34.523 "name": "BaseBdev2", 00:09:34.523 "aliases": [ 00:09:34.523 "3efe91c4-f999-4b3c-af4a-0b3d5dd75d02" 00:09:34.523 ], 00:09:34.523 "product_name": "Malloc disk", 00:09:34.523 "block_size": 512, 00:09:34.523 "num_blocks": 65536, 00:09:34.523 "uuid": "3efe91c4-f999-4b3c-af4a-0b3d5dd75d02", 00:09:34.523 "assigned_rate_limits": { 00:09:34.523 "rw_ios_per_sec": 0, 00:09:34.523 "rw_mbytes_per_sec": 0, 00:09:34.523 
"r_mbytes_per_sec": 0, 00:09:34.523 "w_mbytes_per_sec": 0 00:09:34.523 }, 00:09:34.523 "claimed": true, 00:09:34.523 "claim_type": "exclusive_write", 00:09:34.523 "zoned": false, 00:09:34.523 "supported_io_types": { 00:09:34.523 "read": true, 00:09:34.523 "write": true, 00:09:34.523 "unmap": true, 00:09:34.523 "flush": true, 00:09:34.523 "reset": true, 00:09:34.523 "nvme_admin": false, 00:09:34.523 "nvme_io": false, 00:09:34.523 "nvme_io_md": false, 00:09:34.523 "write_zeroes": true, 00:09:34.523 "zcopy": true, 00:09:34.523 "get_zone_info": false, 00:09:34.523 "zone_management": false, 00:09:34.523 "zone_append": false, 00:09:34.523 "compare": false, 00:09:34.523 "compare_and_write": false, 00:09:34.523 "abort": true, 00:09:34.523 "seek_hole": false, 00:09:34.523 "seek_data": false, 00:09:34.523 "copy": true, 00:09:34.523 "nvme_iov_md": false 00:09:34.523 }, 00:09:34.523 "memory_domains": [ 00:09:34.523 { 00:09:34.523 "dma_device_id": "system", 00:09:34.523 "dma_device_type": 1 00:09:34.523 }, 00:09:34.523 { 00:09:34.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.523 "dma_device_type": 2 00:09:34.523 } 00:09:34.523 ], 00:09:34.523 "driver_specific": {} 00:09:34.523 } 00:09:34.523 ] 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.523 "name": "Existed_Raid", 00:09:34.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.523 "strip_size_kb": 0, 00:09:34.523 "state": "configuring", 00:09:34.523 "raid_level": "raid1", 00:09:34.523 "superblock": false, 00:09:34.523 "num_base_bdevs": 3, 00:09:34.523 "num_base_bdevs_discovered": 2, 00:09:34.523 "num_base_bdevs_operational": 3, 00:09:34.523 "base_bdevs_list": [ 00:09:34.523 { 00:09:34.523 "name": "BaseBdev1", 00:09:34.523 "uuid": "fda2ecd2-d082-4b2f-8c21-1566df48020b", 00:09:34.523 
"is_configured": true, 00:09:34.523 "data_offset": 0, 00:09:34.523 "data_size": 65536 00:09:34.523 }, 00:09:34.523 { 00:09:34.523 "name": "BaseBdev2", 00:09:34.523 "uuid": "3efe91c4-f999-4b3c-af4a-0b3d5dd75d02", 00:09:34.523 "is_configured": true, 00:09:34.523 "data_offset": 0, 00:09:34.523 "data_size": 65536 00:09:34.523 }, 00:09:34.523 { 00:09:34.523 "name": "BaseBdev3", 00:09:34.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.523 "is_configured": false, 00:09:34.523 "data_offset": 0, 00:09:34.523 "data_size": 0 00:09:34.523 } 00:09:34.523 ] 00:09:34.523 }' 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.523 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.092 [2024-11-28 18:50:04.493601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.092 [2024-11-28 18:50:04.493701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:35.092 [2024-11-28 18:50:04.493714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:35.092 [2024-11-28 18:50:04.494014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:35.092 [2024-11-28 18:50:04.494158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:35.092 [2024-11-28 18:50:04.494175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:35.092 [2024-11-28 18:50:04.494416] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:09:35.092 BaseBdev3 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.092 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.093 [ 00:09:35.093 { 00:09:35.093 "name": "BaseBdev3", 00:09:35.093 "aliases": [ 00:09:35.093 "90c7c5c8-0c84-41f6-a2a4-f419d01dd6a0" 00:09:35.093 ], 00:09:35.093 "product_name": "Malloc disk", 00:09:35.093 "block_size": 512, 00:09:35.093 "num_blocks": 65536, 00:09:35.093 "uuid": "90c7c5c8-0c84-41f6-a2a4-f419d01dd6a0", 00:09:35.093 "assigned_rate_limits": { 
00:09:35.093 "rw_ios_per_sec": 0, 00:09:35.093 "rw_mbytes_per_sec": 0, 00:09:35.093 "r_mbytes_per_sec": 0, 00:09:35.093 "w_mbytes_per_sec": 0 00:09:35.093 }, 00:09:35.093 "claimed": true, 00:09:35.093 "claim_type": "exclusive_write", 00:09:35.093 "zoned": false, 00:09:35.093 "supported_io_types": { 00:09:35.093 "read": true, 00:09:35.093 "write": true, 00:09:35.093 "unmap": true, 00:09:35.093 "flush": true, 00:09:35.093 "reset": true, 00:09:35.093 "nvme_admin": false, 00:09:35.093 "nvme_io": false, 00:09:35.093 "nvme_io_md": false, 00:09:35.093 "write_zeroes": true, 00:09:35.093 "zcopy": true, 00:09:35.093 "get_zone_info": false, 00:09:35.093 "zone_management": false, 00:09:35.093 "zone_append": false, 00:09:35.093 "compare": false, 00:09:35.093 "compare_and_write": false, 00:09:35.093 "abort": true, 00:09:35.093 "seek_hole": false, 00:09:35.093 "seek_data": false, 00:09:35.093 "copy": true, 00:09:35.093 "nvme_iov_md": false 00:09:35.093 }, 00:09:35.093 "memory_domains": [ 00:09:35.093 { 00:09:35.093 "dma_device_id": "system", 00:09:35.093 "dma_device_type": 1 00:09:35.093 }, 00:09:35.093 { 00:09:35.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.093 "dma_device_type": 2 00:09:35.093 } 00:09:35.093 ], 00:09:35.093 "driver_specific": {} 00:09:35.093 } 00:09:35.093 ] 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.093 18:50:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.093 "name": "Existed_Raid", 00:09:35.093 "uuid": "67fc3713-4056-46d8-b881-ab8cb9ce4464", 00:09:35.093 "strip_size_kb": 0, 00:09:35.093 "state": "online", 00:09:35.093 "raid_level": "raid1", 00:09:35.093 "superblock": false, 00:09:35.093 "num_base_bdevs": 3, 00:09:35.093 "num_base_bdevs_discovered": 3, 00:09:35.093 "num_base_bdevs_operational": 3, 00:09:35.093 "base_bdevs_list": [ 00:09:35.093 { 00:09:35.093 "name": "BaseBdev1", 00:09:35.093 
"uuid": "fda2ecd2-d082-4b2f-8c21-1566df48020b", 00:09:35.093 "is_configured": true, 00:09:35.093 "data_offset": 0, 00:09:35.093 "data_size": 65536 00:09:35.093 }, 00:09:35.093 { 00:09:35.093 "name": "BaseBdev2", 00:09:35.093 "uuid": "3efe91c4-f999-4b3c-af4a-0b3d5dd75d02", 00:09:35.093 "is_configured": true, 00:09:35.093 "data_offset": 0, 00:09:35.093 "data_size": 65536 00:09:35.093 }, 00:09:35.093 { 00:09:35.093 "name": "BaseBdev3", 00:09:35.093 "uuid": "90c7c5c8-0c84-41f6-a2a4-f419d01dd6a0", 00:09:35.093 "is_configured": true, 00:09:35.093 "data_offset": 0, 00:09:35.093 "data_size": 65536 00:09:35.093 } 00:09:35.093 ] 00:09:35.093 }' 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.093 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.662 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.662 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.662 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.662 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.662 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.662 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.662 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.663 18:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.663 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.663 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.663 [2024-11-28 
18:50:04.970047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.663 18:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.663 "name": "Existed_Raid", 00:09:35.663 "aliases": [ 00:09:35.663 "67fc3713-4056-46d8-b881-ab8cb9ce4464" 00:09:35.663 ], 00:09:35.663 "product_name": "Raid Volume", 00:09:35.663 "block_size": 512, 00:09:35.663 "num_blocks": 65536, 00:09:35.663 "uuid": "67fc3713-4056-46d8-b881-ab8cb9ce4464", 00:09:35.663 "assigned_rate_limits": { 00:09:35.663 "rw_ios_per_sec": 0, 00:09:35.663 "rw_mbytes_per_sec": 0, 00:09:35.663 "r_mbytes_per_sec": 0, 00:09:35.663 "w_mbytes_per_sec": 0 00:09:35.663 }, 00:09:35.663 "claimed": false, 00:09:35.663 "zoned": false, 00:09:35.663 "supported_io_types": { 00:09:35.663 "read": true, 00:09:35.663 "write": true, 00:09:35.663 "unmap": false, 00:09:35.663 "flush": false, 00:09:35.663 "reset": true, 00:09:35.663 "nvme_admin": false, 00:09:35.663 "nvme_io": false, 00:09:35.663 "nvme_io_md": false, 00:09:35.663 "write_zeroes": true, 00:09:35.663 "zcopy": false, 00:09:35.663 "get_zone_info": false, 00:09:35.663 "zone_management": false, 00:09:35.663 "zone_append": false, 00:09:35.663 "compare": false, 00:09:35.663 "compare_and_write": false, 00:09:35.663 "abort": false, 00:09:35.663 "seek_hole": false, 00:09:35.663 "seek_data": false, 00:09:35.663 "copy": false, 00:09:35.663 "nvme_iov_md": false 00:09:35.663 }, 00:09:35.663 "memory_domains": [ 00:09:35.663 { 00:09:35.663 "dma_device_id": "system", 00:09:35.663 "dma_device_type": 1 00:09:35.663 }, 00:09:35.663 { 00:09:35.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.663 "dma_device_type": 2 00:09:35.663 }, 00:09:35.663 { 00:09:35.663 "dma_device_id": "system", 00:09:35.663 "dma_device_type": 1 00:09:35.663 }, 00:09:35.663 { 00:09:35.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:35.663 "dma_device_type": 2 00:09:35.663 }, 00:09:35.663 { 00:09:35.663 "dma_device_id": "system", 00:09:35.663 "dma_device_type": 1 00:09:35.663 }, 00:09:35.663 { 00:09:35.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.663 "dma_device_type": 2 00:09:35.663 } 00:09:35.663 ], 00:09:35.663 "driver_specific": { 00:09:35.663 "raid": { 00:09:35.663 "uuid": "67fc3713-4056-46d8-b881-ab8cb9ce4464", 00:09:35.663 "strip_size_kb": 0, 00:09:35.663 "state": "online", 00:09:35.663 "raid_level": "raid1", 00:09:35.663 "superblock": false, 00:09:35.663 "num_base_bdevs": 3, 00:09:35.663 "num_base_bdevs_discovered": 3, 00:09:35.663 "num_base_bdevs_operational": 3, 00:09:35.663 "base_bdevs_list": [ 00:09:35.663 { 00:09:35.663 "name": "BaseBdev1", 00:09:35.663 "uuid": "fda2ecd2-d082-4b2f-8c21-1566df48020b", 00:09:35.663 "is_configured": true, 00:09:35.663 "data_offset": 0, 00:09:35.663 "data_size": 65536 00:09:35.663 }, 00:09:35.663 { 00:09:35.663 "name": "BaseBdev2", 00:09:35.663 "uuid": "3efe91c4-f999-4b3c-af4a-0b3d5dd75d02", 00:09:35.663 "is_configured": true, 00:09:35.663 "data_offset": 0, 00:09:35.663 "data_size": 65536 00:09:35.663 }, 00:09:35.663 { 00:09:35.663 "name": "BaseBdev3", 00:09:35.663 "uuid": "90c7c5c8-0c84-41f6-a2a4-f419d01dd6a0", 00:09:35.663 "is_configured": true, 00:09:35.663 "data_offset": 0, 00:09:35.663 "data_size": 65536 00:09:35.663 } 00:09:35.663 ] 00:09:35.663 } 00:09:35.663 } 00:09:35.663 }' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.663 BaseBdev2 00:09:35.663 BaseBdev3' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.663 [2024-11-28 18:50:05.213880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:35.663 18:50:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.663 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.922 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.922 "name": "Existed_Raid", 00:09:35.922 "uuid": "67fc3713-4056-46d8-b881-ab8cb9ce4464", 00:09:35.922 "strip_size_kb": 0, 00:09:35.922 "state": "online", 00:09:35.922 "raid_level": "raid1", 
00:09:35.922 "superblock": false, 00:09:35.922 "num_base_bdevs": 3, 00:09:35.922 "num_base_bdevs_discovered": 2, 00:09:35.922 "num_base_bdevs_operational": 2, 00:09:35.922 "base_bdevs_list": [ 00:09:35.922 { 00:09:35.922 "name": null, 00:09:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.922 "is_configured": false, 00:09:35.922 "data_offset": 0, 00:09:35.922 "data_size": 65536 00:09:35.922 }, 00:09:35.922 { 00:09:35.922 "name": "BaseBdev2", 00:09:35.922 "uuid": "3efe91c4-f999-4b3c-af4a-0b3d5dd75d02", 00:09:35.922 "is_configured": true, 00:09:35.922 "data_offset": 0, 00:09:35.922 "data_size": 65536 00:09:35.922 }, 00:09:35.922 { 00:09:35.922 "name": "BaseBdev3", 00:09:35.922 "uuid": "90c7c5c8-0c84-41f6-a2a4-f419d01dd6a0", 00:09:35.922 "is_configured": true, 00:09:35.922 "data_offset": 0, 00:09:35.922 "data_size": 65536 00:09:35.922 } 00:09:35.922 ] 00:09:35.922 }' 00:09:35.922 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.922 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.182 18:50:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.182 [2024-11-28 18:50:05.705320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.182 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:36.183 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.183 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.183 
[2024-11-28 18:50:05.776624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.183 [2024-11-28 18:50:05.776714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.443 [2024-11-28 18:50:05.788382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.443 [2024-11-28 18:50:05.788458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.443 [2024-11-28 18:50:05.788472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.443 BaseBdev2 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:36.443 [ 00:09:36.443 { 00:09:36.443 "name": "BaseBdev2", 00:09:36.443 "aliases": [ 00:09:36.443 "6dcfb20d-11b4-4396-8d35-5623c94ca7d2" 00:09:36.443 ], 00:09:36.443 "product_name": "Malloc disk", 00:09:36.443 "block_size": 512, 00:09:36.443 "num_blocks": 65536, 00:09:36.443 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:36.443 "assigned_rate_limits": { 00:09:36.443 "rw_ios_per_sec": 0, 00:09:36.443 "rw_mbytes_per_sec": 0, 00:09:36.443 "r_mbytes_per_sec": 0, 00:09:36.443 "w_mbytes_per_sec": 0 00:09:36.443 }, 00:09:36.443 "claimed": false, 00:09:36.443 "zoned": false, 00:09:36.443 "supported_io_types": { 00:09:36.443 "read": true, 00:09:36.443 "write": true, 00:09:36.443 "unmap": true, 00:09:36.443 "flush": true, 00:09:36.443 "reset": true, 00:09:36.443 "nvme_admin": false, 00:09:36.443 "nvme_io": false, 00:09:36.443 "nvme_io_md": false, 00:09:36.443 "write_zeroes": true, 00:09:36.443 "zcopy": true, 00:09:36.443 "get_zone_info": false, 00:09:36.443 "zone_management": false, 00:09:36.443 "zone_append": false, 00:09:36.443 "compare": false, 00:09:36.443 "compare_and_write": false, 00:09:36.443 "abort": true, 00:09:36.443 "seek_hole": false, 00:09:36.443 "seek_data": false, 00:09:36.443 "copy": true, 00:09:36.443 "nvme_iov_md": false 00:09:36.443 }, 00:09:36.443 "memory_domains": [ 00:09:36.443 { 00:09:36.443 "dma_device_id": "system", 00:09:36.443 "dma_device_type": 1 00:09:36.443 }, 00:09:36.443 { 00:09:36.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.443 "dma_device_type": 2 00:09:36.443 } 00:09:36.443 ], 00:09:36.443 "driver_specific": {} 00:09:36.443 } 00:09:36.443 ] 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.443 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.443 BaseBdev3 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:36.444 [ 00:09:36.444 { 00:09:36.444 "name": "BaseBdev3", 00:09:36.444 "aliases": [ 00:09:36.444 "35870cb6-b20e-4137-9344-5d0777a6b101" 00:09:36.444 ], 00:09:36.444 "product_name": "Malloc disk", 00:09:36.444 "block_size": 512, 00:09:36.444 "num_blocks": 65536, 00:09:36.444 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:36.444 "assigned_rate_limits": { 00:09:36.444 "rw_ios_per_sec": 0, 00:09:36.444 "rw_mbytes_per_sec": 0, 00:09:36.444 "r_mbytes_per_sec": 0, 00:09:36.444 "w_mbytes_per_sec": 0 00:09:36.444 }, 00:09:36.444 "claimed": false, 00:09:36.444 "zoned": false, 00:09:36.444 "supported_io_types": { 00:09:36.444 "read": true, 00:09:36.444 "write": true, 00:09:36.444 "unmap": true, 00:09:36.444 "flush": true, 00:09:36.444 "reset": true, 00:09:36.444 "nvme_admin": false, 00:09:36.444 "nvme_io": false, 00:09:36.444 "nvme_io_md": false, 00:09:36.444 "write_zeroes": true, 00:09:36.444 "zcopy": true, 00:09:36.444 "get_zone_info": false, 00:09:36.444 "zone_management": false, 00:09:36.444 "zone_append": false, 00:09:36.444 "compare": false, 00:09:36.444 "compare_and_write": false, 00:09:36.444 "abort": true, 00:09:36.444 "seek_hole": false, 00:09:36.444 "seek_data": false, 00:09:36.444 "copy": true, 00:09:36.444 "nvme_iov_md": false 00:09:36.444 }, 00:09:36.444 "memory_domains": [ 00:09:36.444 { 00:09:36.444 "dma_device_id": "system", 00:09:36.444 "dma_device_type": 1 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.444 "dma_device_type": 2 00:09:36.444 } 00:09:36.444 ], 00:09:36.444 "driver_specific": {} 00:09:36.444 } 00:09:36.444 ] 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 [2024-11-28 18:50:05.955569] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.444 [2024-11-28 18:50:05.955652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.444 [2024-11-28 18:50:05.955701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.444 [2024-11-28 18:50:05.957461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.444 "name": "Existed_Raid", 00:09:36.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.444 "strip_size_kb": 0, 00:09:36.444 "state": "configuring", 00:09:36.444 "raid_level": "raid1", 00:09:36.444 "superblock": false, 00:09:36.444 "num_base_bdevs": 3, 00:09:36.444 "num_base_bdevs_discovered": 2, 00:09:36.444 "num_base_bdevs_operational": 3, 00:09:36.444 "base_bdevs_list": [ 00:09:36.444 { 00:09:36.444 "name": "BaseBdev1", 00:09:36.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.444 "is_configured": false, 00:09:36.444 "data_offset": 0, 00:09:36.444 "data_size": 0 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "name": "BaseBdev2", 00:09:36.444 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:36.444 "is_configured": true, 00:09:36.444 "data_offset": 0, 00:09:36.444 "data_size": 65536 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "name": "BaseBdev3", 00:09:36.444 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:36.444 "is_configured": true, 00:09:36.444 "data_offset": 0, 00:09:36.444 "data_size": 65536 00:09:36.444 } 00:09:36.444 ] 
00:09:36.444 }' 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.444 18:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.014 [2024-11-28 18:50:06.351657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.014 "name": "Existed_Raid", 00:09:37.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.014 "strip_size_kb": 0, 00:09:37.014 "state": "configuring", 00:09:37.014 "raid_level": "raid1", 00:09:37.014 "superblock": false, 00:09:37.014 "num_base_bdevs": 3, 00:09:37.014 "num_base_bdevs_discovered": 1, 00:09:37.014 "num_base_bdevs_operational": 3, 00:09:37.014 "base_bdevs_list": [ 00:09:37.014 { 00:09:37.014 "name": "BaseBdev1", 00:09:37.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.014 "is_configured": false, 00:09:37.014 "data_offset": 0, 00:09:37.014 "data_size": 0 00:09:37.014 }, 00:09:37.014 { 00:09:37.014 "name": null, 00:09:37.014 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:37.014 "is_configured": false, 00:09:37.014 "data_offset": 0, 00:09:37.014 "data_size": 65536 00:09:37.014 }, 00:09:37.014 { 00:09:37.014 "name": "BaseBdev3", 00:09:37.014 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:37.014 "is_configured": true, 00:09:37.014 "data_offset": 0, 00:09:37.014 "data_size": 65536 00:09:37.014 } 00:09:37.014 ] 00:09:37.014 }' 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.014 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # 
jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 [2024-11-28 18:50:06.806635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.275 BaseBdev1 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.275 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 [ 00:09:37.275 { 00:09:37.275 "name": "BaseBdev1", 00:09:37.275 "aliases": [ 00:09:37.275 "ef074417-ec07-42b3-ab37-1d4a34807e40" 00:09:37.275 ], 00:09:37.275 "product_name": "Malloc disk", 00:09:37.275 "block_size": 512, 00:09:37.275 "num_blocks": 65536, 00:09:37.275 "uuid": "ef074417-ec07-42b3-ab37-1d4a34807e40", 00:09:37.275 "assigned_rate_limits": { 00:09:37.275 "rw_ios_per_sec": 0, 00:09:37.275 "rw_mbytes_per_sec": 0, 00:09:37.275 "r_mbytes_per_sec": 0, 00:09:37.275 "w_mbytes_per_sec": 0 00:09:37.275 }, 00:09:37.275 "claimed": true, 00:09:37.275 "claim_type": "exclusive_write", 00:09:37.275 "zoned": false, 00:09:37.275 "supported_io_types": { 00:09:37.275 "read": true, 00:09:37.275 "write": true, 00:09:37.275 "unmap": true, 00:09:37.275 "flush": true, 00:09:37.275 "reset": true, 00:09:37.275 "nvme_admin": false, 00:09:37.275 "nvme_io": false, 00:09:37.275 "nvme_io_md": false, 00:09:37.275 "write_zeroes": true, 00:09:37.275 "zcopy": true, 00:09:37.275 "get_zone_info": false, 00:09:37.275 "zone_management": false, 00:09:37.275 "zone_append": false, 00:09:37.275 "compare": false, 00:09:37.275 "compare_and_write": false, 00:09:37.275 "abort": true, 00:09:37.275 "seek_hole": false, 00:09:37.275 "seek_data": false, 00:09:37.275 "copy": true, 00:09:37.275 "nvme_iov_md": false 00:09:37.275 }, 
00:09:37.275 "memory_domains": [ 00:09:37.275 { 00:09:37.275 "dma_device_id": "system", 00:09:37.275 "dma_device_type": 1 00:09:37.276 }, 00:09:37.276 { 00:09:37.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.276 "dma_device_type": 2 00:09:37.276 } 00:09:37.276 ], 00:09:37.276 "driver_specific": {} 00:09:37.276 } 00:09:37.276 ] 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.276 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.536 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.536 "name": "Existed_Raid", 00:09:37.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.536 "strip_size_kb": 0, 00:09:37.536 "state": "configuring", 00:09:37.536 "raid_level": "raid1", 00:09:37.536 "superblock": false, 00:09:37.536 "num_base_bdevs": 3, 00:09:37.536 "num_base_bdevs_discovered": 2, 00:09:37.536 "num_base_bdevs_operational": 3, 00:09:37.536 "base_bdevs_list": [ 00:09:37.536 { 00:09:37.536 "name": "BaseBdev1", 00:09:37.536 "uuid": "ef074417-ec07-42b3-ab37-1d4a34807e40", 00:09:37.536 "is_configured": true, 00:09:37.536 "data_offset": 0, 00:09:37.536 "data_size": 65536 00:09:37.536 }, 00:09:37.536 { 00:09:37.536 "name": null, 00:09:37.536 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:37.536 "is_configured": false, 00:09:37.536 "data_offset": 0, 00:09:37.536 "data_size": 65536 00:09:37.536 }, 00:09:37.536 { 00:09:37.536 "name": "BaseBdev3", 00:09:37.536 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:37.536 "is_configured": true, 00:09:37.536 "data_offset": 0, 00:09:37.536 "data_size": 65536 00:09:37.536 } 00:09:37.536 ] 00:09:37.536 }' 00:09:37.536 18:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.536 18:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.797 [2024-11-28 18:50:07.350835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.797 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.797 
18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.798 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.798 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.798 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.798 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.798 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.798 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.798 "name": "Existed_Raid", 00:09:37.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.798 "strip_size_kb": 0, 00:09:37.798 "state": "configuring", 00:09:37.798 "raid_level": "raid1", 00:09:37.798 "superblock": false, 00:09:37.798 "num_base_bdevs": 3, 00:09:37.798 "num_base_bdevs_discovered": 1, 00:09:37.798 "num_base_bdevs_operational": 3, 00:09:37.798 "base_bdevs_list": [ 00:09:37.798 { 00:09:37.798 "name": "BaseBdev1", 00:09:37.798 "uuid": "ef074417-ec07-42b3-ab37-1d4a34807e40", 00:09:37.798 "is_configured": true, 00:09:37.798 "data_offset": 0, 00:09:37.798 "data_size": 65536 00:09:37.798 }, 00:09:37.798 { 00:09:37.798 "name": null, 00:09:37.798 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:37.798 "is_configured": false, 00:09:37.798 "data_offset": 0, 00:09:37.798 "data_size": 65536 00:09:37.798 }, 00:09:37.798 { 00:09:37.798 "name": null, 00:09:37.798 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:37.798 "is_configured": false, 00:09:37.798 "data_offset": 0, 00:09:37.798 "data_size": 65536 00:09:37.798 } 00:09:37.798 ] 00:09:37.798 }' 00:09:37.798 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.798 18:50:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.367 [2024-11-28 18:50:07.810985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.367 18:50:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.367 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.368 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.368 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.368 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.368 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.368 "name": "Existed_Raid", 00:09:38.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.368 "strip_size_kb": 0, 00:09:38.368 "state": "configuring", 00:09:38.368 "raid_level": "raid1", 00:09:38.368 "superblock": false, 00:09:38.368 "num_base_bdevs": 3, 00:09:38.368 "num_base_bdevs_discovered": 2, 00:09:38.368 "num_base_bdevs_operational": 3, 00:09:38.368 "base_bdevs_list": [ 00:09:38.368 { 00:09:38.368 "name": "BaseBdev1", 00:09:38.368 "uuid": "ef074417-ec07-42b3-ab37-1d4a34807e40", 00:09:38.368 "is_configured": true, 00:09:38.368 "data_offset": 0, 00:09:38.368 "data_size": 65536 00:09:38.368 }, 00:09:38.368 { 00:09:38.368 "name": null, 00:09:38.368 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:38.368 "is_configured": false, 00:09:38.368 "data_offset": 
0, 00:09:38.368 "data_size": 65536 00:09:38.368 }, 00:09:38.368 { 00:09:38.368 "name": "BaseBdev3", 00:09:38.368 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:38.368 "is_configured": true, 00:09:38.368 "data_offset": 0, 00:09:38.368 "data_size": 65536 00:09:38.368 } 00:09:38.368 ] 00:09:38.368 }' 00:09:38.368 18:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.368 18:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.627 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.627 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.627 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.627 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.887 [2024-11-28 18:50:08.271138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.887 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.887 "name": "Existed_Raid", 00:09:38.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.887 "strip_size_kb": 0, 00:09:38.887 "state": "configuring", 00:09:38.887 "raid_level": "raid1", 00:09:38.887 "superblock": false, 00:09:38.887 "num_base_bdevs": 3, 00:09:38.888 "num_base_bdevs_discovered": 1, 00:09:38.888 "num_base_bdevs_operational": 3, 00:09:38.888 "base_bdevs_list": [ 
00:09:38.888 { 00:09:38.888 "name": null, 00:09:38.888 "uuid": "ef074417-ec07-42b3-ab37-1d4a34807e40", 00:09:38.888 "is_configured": false, 00:09:38.888 "data_offset": 0, 00:09:38.888 "data_size": 65536 00:09:38.888 }, 00:09:38.888 { 00:09:38.888 "name": null, 00:09:38.888 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:38.888 "is_configured": false, 00:09:38.888 "data_offset": 0, 00:09:38.888 "data_size": 65536 00:09:38.888 }, 00:09:38.888 { 00:09:38.888 "name": "BaseBdev3", 00:09:38.888 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:38.888 "is_configured": true, 00:09:38.888 "data_offset": 0, 00:09:38.888 "data_size": 65536 00:09:38.888 } 00:09:38.888 ] 00:09:38.888 }' 00:09:38.888 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.888 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.179 [2024-11-28 18:50:08.749796] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.179 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.464 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.464 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:39.464 "name": "Existed_Raid", 00:09:39.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.464 "strip_size_kb": 0, 00:09:39.464 "state": "configuring", 00:09:39.464 "raid_level": "raid1", 00:09:39.464 "superblock": false, 00:09:39.464 "num_base_bdevs": 3, 00:09:39.464 "num_base_bdevs_discovered": 2, 00:09:39.464 "num_base_bdevs_operational": 3, 00:09:39.464 "base_bdevs_list": [ 00:09:39.464 { 00:09:39.464 "name": null, 00:09:39.464 "uuid": "ef074417-ec07-42b3-ab37-1d4a34807e40", 00:09:39.464 "is_configured": false, 00:09:39.464 "data_offset": 0, 00:09:39.464 "data_size": 65536 00:09:39.464 }, 00:09:39.464 { 00:09:39.464 "name": "BaseBdev2", 00:09:39.464 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:39.464 "is_configured": true, 00:09:39.464 "data_offset": 0, 00:09:39.464 "data_size": 65536 00:09:39.464 }, 00:09:39.464 { 00:09:39.464 "name": "BaseBdev3", 00:09:39.464 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:39.464 "is_configured": true, 00:09:39.464 "data_offset": 0, 00:09:39.464 "data_size": 65536 00:09:39.464 } 00:09:39.464 ] 00:09:39.464 }' 00:09:39.464 18:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.464 18:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 
00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ef074417-ec07-42b3-ab37-1d4a34807e40 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.724 [2024-11-28 18:50:09.260826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:39.724 [2024-11-28 18:50:09.260925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:39.724 [2024-11-28 18:50:09.260942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:39.724 [2024-11-28 18:50:09.261205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:39.724 [2024-11-28 18:50:09.261328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:39.724 [2024-11-28 18:50:09.261336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:39.724 [2024-11-28 18:50:09.261532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.724 NewBaseBdev 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.724 
18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.724 [ 00:09:39.724 { 00:09:39.724 "name": "NewBaseBdev", 00:09:39.724 "aliases": [ 00:09:39.724 "ef074417-ec07-42b3-ab37-1d4a34807e40" 00:09:39.724 ], 00:09:39.724 "product_name": "Malloc disk", 00:09:39.724 "block_size": 512, 00:09:39.724 "num_blocks": 65536, 00:09:39.724 "uuid": "ef074417-ec07-42b3-ab37-1d4a34807e40", 00:09:39.724 "assigned_rate_limits": { 00:09:39.724 "rw_ios_per_sec": 0, 00:09:39.724 "rw_mbytes_per_sec": 0, 00:09:39.724 "r_mbytes_per_sec": 0, 00:09:39.724 "w_mbytes_per_sec": 0 00:09:39.724 }, 00:09:39.724 
"claimed": true, 00:09:39.724 "claim_type": "exclusive_write", 00:09:39.724 "zoned": false, 00:09:39.724 "supported_io_types": { 00:09:39.724 "read": true, 00:09:39.724 "write": true, 00:09:39.724 "unmap": true, 00:09:39.724 "flush": true, 00:09:39.724 "reset": true, 00:09:39.724 "nvme_admin": false, 00:09:39.724 "nvme_io": false, 00:09:39.724 "nvme_io_md": false, 00:09:39.724 "write_zeroes": true, 00:09:39.724 "zcopy": true, 00:09:39.724 "get_zone_info": false, 00:09:39.724 "zone_management": false, 00:09:39.724 "zone_append": false, 00:09:39.724 "compare": false, 00:09:39.724 "compare_and_write": false, 00:09:39.724 "abort": true, 00:09:39.724 "seek_hole": false, 00:09:39.724 "seek_data": false, 00:09:39.724 "copy": true, 00:09:39.724 "nvme_iov_md": false 00:09:39.724 }, 00:09:39.724 "memory_domains": [ 00:09:39.724 { 00:09:39.724 "dma_device_id": "system", 00:09:39.724 "dma_device_type": 1 00:09:39.724 }, 00:09:39.724 { 00:09:39.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.724 "dma_device_type": 2 00:09:39.724 } 00:09:39.724 ], 00:09:39.724 "driver_specific": {} 00:09:39.724 } 00:09:39.724 ] 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.724 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.984 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.984 "name": "Existed_Raid", 00:09:39.984 "uuid": "72c12c22-d136-4894-b49d-b1b2a8c2bee6", 00:09:39.984 "strip_size_kb": 0, 00:09:39.984 "state": "online", 00:09:39.984 "raid_level": "raid1", 00:09:39.984 "superblock": false, 00:09:39.984 "num_base_bdevs": 3, 00:09:39.984 "num_base_bdevs_discovered": 3, 00:09:39.984 "num_base_bdevs_operational": 3, 00:09:39.984 "base_bdevs_list": [ 00:09:39.984 { 00:09:39.984 "name": "NewBaseBdev", 00:09:39.984 "uuid": "ef074417-ec07-42b3-ab37-1d4a34807e40", 00:09:39.984 "is_configured": true, 00:09:39.984 "data_offset": 0, 00:09:39.984 "data_size": 65536 00:09:39.984 }, 00:09:39.984 { 00:09:39.984 "name": "BaseBdev2", 00:09:39.984 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:39.984 "is_configured": true, 00:09:39.984 "data_offset": 0, 00:09:39.984 "data_size": 65536 
00:09:39.984 }, 00:09:39.984 { 00:09:39.984 "name": "BaseBdev3", 00:09:39.984 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:39.984 "is_configured": true, 00:09:39.984 "data_offset": 0, 00:09:39.984 "data_size": 65536 00:09:39.984 } 00:09:39.984 ] 00:09:39.984 }' 00:09:39.984 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.984 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.244 [2024-11-28 18:50:09.721265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.244 "name": "Existed_Raid", 00:09:40.244 "aliases": [ 
00:09:40.244 "72c12c22-d136-4894-b49d-b1b2a8c2bee6" 00:09:40.244 ], 00:09:40.244 "product_name": "Raid Volume", 00:09:40.244 "block_size": 512, 00:09:40.244 "num_blocks": 65536, 00:09:40.244 "uuid": "72c12c22-d136-4894-b49d-b1b2a8c2bee6", 00:09:40.244 "assigned_rate_limits": { 00:09:40.244 "rw_ios_per_sec": 0, 00:09:40.244 "rw_mbytes_per_sec": 0, 00:09:40.244 "r_mbytes_per_sec": 0, 00:09:40.244 "w_mbytes_per_sec": 0 00:09:40.244 }, 00:09:40.244 "claimed": false, 00:09:40.244 "zoned": false, 00:09:40.244 "supported_io_types": { 00:09:40.244 "read": true, 00:09:40.244 "write": true, 00:09:40.244 "unmap": false, 00:09:40.244 "flush": false, 00:09:40.244 "reset": true, 00:09:40.244 "nvme_admin": false, 00:09:40.244 "nvme_io": false, 00:09:40.244 "nvme_io_md": false, 00:09:40.244 "write_zeroes": true, 00:09:40.244 "zcopy": false, 00:09:40.244 "get_zone_info": false, 00:09:40.244 "zone_management": false, 00:09:40.244 "zone_append": false, 00:09:40.244 "compare": false, 00:09:40.244 "compare_and_write": false, 00:09:40.244 "abort": false, 00:09:40.244 "seek_hole": false, 00:09:40.244 "seek_data": false, 00:09:40.244 "copy": false, 00:09:40.244 "nvme_iov_md": false 00:09:40.244 }, 00:09:40.244 "memory_domains": [ 00:09:40.244 { 00:09:40.244 "dma_device_id": "system", 00:09:40.244 "dma_device_type": 1 00:09:40.244 }, 00:09:40.244 { 00:09:40.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.244 "dma_device_type": 2 00:09:40.244 }, 00:09:40.244 { 00:09:40.244 "dma_device_id": "system", 00:09:40.244 "dma_device_type": 1 00:09:40.244 }, 00:09:40.244 { 00:09:40.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.244 "dma_device_type": 2 00:09:40.244 }, 00:09:40.244 { 00:09:40.244 "dma_device_id": "system", 00:09:40.244 "dma_device_type": 1 00:09:40.244 }, 00:09:40.244 { 00:09:40.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.244 "dma_device_type": 2 00:09:40.244 } 00:09:40.244 ], 00:09:40.244 "driver_specific": { 00:09:40.244 "raid": { 00:09:40.244 "uuid": 
"72c12c22-d136-4894-b49d-b1b2a8c2bee6", 00:09:40.244 "strip_size_kb": 0, 00:09:40.244 "state": "online", 00:09:40.244 "raid_level": "raid1", 00:09:40.244 "superblock": false, 00:09:40.244 "num_base_bdevs": 3, 00:09:40.244 "num_base_bdevs_discovered": 3, 00:09:40.244 "num_base_bdevs_operational": 3, 00:09:40.244 "base_bdevs_list": [ 00:09:40.244 { 00:09:40.244 "name": "NewBaseBdev", 00:09:40.244 "uuid": "ef074417-ec07-42b3-ab37-1d4a34807e40", 00:09:40.244 "is_configured": true, 00:09:40.244 "data_offset": 0, 00:09:40.244 "data_size": 65536 00:09:40.244 }, 00:09:40.244 { 00:09:40.244 "name": "BaseBdev2", 00:09:40.244 "uuid": "6dcfb20d-11b4-4396-8d35-5623c94ca7d2", 00:09:40.244 "is_configured": true, 00:09:40.244 "data_offset": 0, 00:09:40.244 "data_size": 65536 00:09:40.244 }, 00:09:40.244 { 00:09:40.244 "name": "BaseBdev3", 00:09:40.244 "uuid": "35870cb6-b20e-4137-9344-5d0777a6b101", 00:09:40.244 "is_configured": true, 00:09:40.244 "data_offset": 0, 00:09:40.244 "data_size": 65536 00:09:40.244 } 00:09:40.244 ] 00:09:40.244 } 00:09:40.244 } 00:09:40.244 }' 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:40.244 BaseBdev2 00:09:40.244 BaseBdev3' 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.244 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.504 [2024-11-28 18:50:09.989047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.504 [2024-11-28 18:50:09.989074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.504 [2024-11-28 18:50:09.989135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.504 [2024-11-28 18:50:09.989377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.504 [2024-11-28 18:50:09.989390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79977 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79977 ']' 00:09:40.504 18:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 79977 00:09:40.504 18:50:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:40.504 18:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.504 18:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79977 00:09:40.504 18:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.504 18:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.504 18:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79977' 00:09:40.504 killing process with pid 79977 00:09:40.504 18:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 79977 00:09:40.504 [2024-11-28 18:50:10.041167] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.504 18:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 79977 00:09:40.504 [2024-11-28 18:50:10.072528] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.764 18:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:40.764 00:09:40.764 real 0m8.604s 00:09:40.764 user 0m14.727s 00:09:40.764 sys 0m1.678s 00:09:40.764 18:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.764 18:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.764 ************************************ 00:09:40.764 END TEST raid_state_function_test 00:09:40.764 ************************************ 00:09:40.764 18:50:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:40.764 18:50:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:40.764 18:50:10 bdev_raid -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:09:40.764 18:50:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.764 ************************************ 00:09:40.764 START TEST raid_state_function_test_sb 00:09:40.764 ************************************ 00:09:40.764 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:40.764 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:40.764 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:40.764 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:40.764 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80582 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80582' 00:09:41.024 Process raid pid: 80582 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80582 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80582 ']' 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.024 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.024 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.024 [2024-11-28 18:50:10.457986] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:41.024 [2024-11-28 18:50:10.458190] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.024 [2024-11-28 18:50:10.593365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:41.284 [2024-11-28 18:50:10.631308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.284 [2024-11-28 18:50:10.656122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.284 [2024-11-28 18:50:10.698698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.284 [2024-11-28 18:50:10.698792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.854 [2024-11-28 18:50:11.278422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.854 [2024-11-28 18:50:11.278525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.854 [2024-11-28 18:50:11.278556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.854 [2024-11-28 18:50:11.278577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.854 [2024-11-28 18:50:11.278600] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:41.854 [2024-11-28 18:50:11.278618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.854 18:50:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.854 "name": "Existed_Raid", 00:09:41.854 "uuid": "a02d7747-3882-4096-8a53-2ef625b5d42b", 00:09:41.854 "strip_size_kb": 0, 
00:09:41.854 "state": "configuring", 00:09:41.854 "raid_level": "raid1", 00:09:41.854 "superblock": true, 00:09:41.854 "num_base_bdevs": 3, 00:09:41.854 "num_base_bdevs_discovered": 0, 00:09:41.854 "num_base_bdevs_operational": 3, 00:09:41.854 "base_bdevs_list": [ 00:09:41.854 { 00:09:41.854 "name": "BaseBdev1", 00:09:41.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.854 "is_configured": false, 00:09:41.854 "data_offset": 0, 00:09:41.854 "data_size": 0 00:09:41.854 }, 00:09:41.854 { 00:09:41.854 "name": "BaseBdev2", 00:09:41.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.854 "is_configured": false, 00:09:41.854 "data_offset": 0, 00:09:41.854 "data_size": 0 00:09:41.854 }, 00:09:41.854 { 00:09:41.854 "name": "BaseBdev3", 00:09:41.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.854 "is_configured": false, 00:09:41.854 "data_offset": 0, 00:09:41.854 "data_size": 0 00:09:41.854 } 00:09:41.854 ] 00:09:41.854 }' 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.854 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.113 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.113 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.113 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.113 [2024-11-28 18:50:11.690434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.113 [2024-11-28 18:50:11.690508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:42.113 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.113 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:42.113 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.113 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.113 [2024-11-28 18:50:11.702486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.114 [2024-11-28 18:50:11.702520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.114 [2024-11-28 18:50:11.702531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.114 [2024-11-28 18:50:11.702538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.114 [2024-11-28 18:50:11.702546] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.114 [2024-11-28 18:50:11.702552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.114 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.114 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.114 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.114 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.373 [2024-11-28 18:50:11.723570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.373 BaseBdev1 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.374 [ 00:09:42.374 { 00:09:42.374 "name": "BaseBdev1", 00:09:42.374 "aliases": [ 00:09:42.374 "5c815615-7aed-428f-9425-7b6868518d36" 00:09:42.374 ], 00:09:42.374 "product_name": "Malloc disk", 00:09:42.374 "block_size": 512, 00:09:42.374 "num_blocks": 65536, 00:09:42.374 "uuid": "5c815615-7aed-428f-9425-7b6868518d36", 00:09:42.374 "assigned_rate_limits": { 00:09:42.374 "rw_ios_per_sec": 0, 00:09:42.374 "rw_mbytes_per_sec": 0, 00:09:42.374 "r_mbytes_per_sec": 0, 00:09:42.374 "w_mbytes_per_sec": 0 00:09:42.374 }, 00:09:42.374 "claimed": true, 00:09:42.374 "claim_type": "exclusive_write", 00:09:42.374 "zoned": false, 00:09:42.374 "supported_io_types": { 
00:09:42.374 "read": true, 00:09:42.374 "write": true, 00:09:42.374 "unmap": true, 00:09:42.374 "flush": true, 00:09:42.374 "reset": true, 00:09:42.374 "nvme_admin": false, 00:09:42.374 "nvme_io": false, 00:09:42.374 "nvme_io_md": false, 00:09:42.374 "write_zeroes": true, 00:09:42.374 "zcopy": true, 00:09:42.374 "get_zone_info": false, 00:09:42.374 "zone_management": false, 00:09:42.374 "zone_append": false, 00:09:42.374 "compare": false, 00:09:42.374 "compare_and_write": false, 00:09:42.374 "abort": true, 00:09:42.374 "seek_hole": false, 00:09:42.374 "seek_data": false, 00:09:42.374 "copy": true, 00:09:42.374 "nvme_iov_md": false 00:09:42.374 }, 00:09:42.374 "memory_domains": [ 00:09:42.374 { 00:09:42.374 "dma_device_id": "system", 00:09:42.374 "dma_device_type": 1 00:09:42.374 }, 00:09:42.374 { 00:09:42.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.374 "dma_device_type": 2 00:09:42.374 } 00:09:42.374 ], 00:09:42.374 "driver_specific": {} 00:09:42.374 } 00:09:42.374 ] 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.374 18:50:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.374 "name": "Existed_Raid", 00:09:42.374 "uuid": "d4122d6d-58ca-43c9-baba-f8e1bd44510f", 00:09:42.374 "strip_size_kb": 0, 00:09:42.374 "state": "configuring", 00:09:42.374 "raid_level": "raid1", 00:09:42.374 "superblock": true, 00:09:42.374 "num_base_bdevs": 3, 00:09:42.374 "num_base_bdevs_discovered": 1, 00:09:42.374 "num_base_bdevs_operational": 3, 00:09:42.374 "base_bdevs_list": [ 00:09:42.374 { 00:09:42.374 "name": "BaseBdev1", 00:09:42.374 "uuid": "5c815615-7aed-428f-9425-7b6868518d36", 00:09:42.374 "is_configured": true, 00:09:42.374 "data_offset": 2048, 00:09:42.374 "data_size": 63488 00:09:42.374 }, 00:09:42.374 { 00:09:42.374 "name": "BaseBdev2", 00:09:42.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.374 "is_configured": false, 00:09:42.374 "data_offset": 0, 00:09:42.374 "data_size": 0 00:09:42.374 }, 00:09:42.374 { 00:09:42.374 "name": 
"BaseBdev3", 00:09:42.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.374 "is_configured": false, 00:09:42.374 "data_offset": 0, 00:09:42.374 "data_size": 0 00:09:42.374 } 00:09:42.374 ] 00:09:42.374 }' 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.374 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.634 [2024-11-28 18:50:12.195716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.634 [2024-11-28 18:50:12.195811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.634 [2024-11-28 18:50:12.207751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.634 [2024-11-28 18:50:12.209565] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.634 [2024-11-28 18:50:12.209602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.634 [2024-11-28 18:50:12.209613] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.634 [2024-11-28 18:50:12.209621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.634 18:50:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.634 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.893 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.893 "name": "Existed_Raid", 00:09:42.894 "uuid": "149cc78f-445e-440b-83d3-a2ea0f9b6a4e", 00:09:42.894 "strip_size_kb": 0, 00:09:42.894 "state": "configuring", 00:09:42.894 "raid_level": "raid1", 00:09:42.894 "superblock": true, 00:09:42.894 "num_base_bdevs": 3, 00:09:42.894 "num_base_bdevs_discovered": 1, 00:09:42.894 "num_base_bdevs_operational": 3, 00:09:42.894 "base_bdevs_list": [ 00:09:42.894 { 00:09:42.894 "name": "BaseBdev1", 00:09:42.894 "uuid": "5c815615-7aed-428f-9425-7b6868518d36", 00:09:42.894 "is_configured": true, 00:09:42.894 "data_offset": 2048, 00:09:42.894 "data_size": 63488 00:09:42.894 }, 00:09:42.894 { 00:09:42.894 "name": "BaseBdev2", 00:09:42.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.894 "is_configured": false, 00:09:42.894 "data_offset": 0, 00:09:42.894 "data_size": 0 00:09:42.894 }, 00:09:42.894 { 00:09:42.894 "name": "BaseBdev3", 00:09:42.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.894 "is_configured": false, 00:09:42.894 "data_offset": 0, 00:09:42.894 "data_size": 0 00:09:42.894 } 00:09:42.894 ] 00:09:42.894 }' 00:09:42.894 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.894 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.154 [2024-11-28 18:50:12.654738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.154 BaseBdev2 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.154 [ 00:09:43.154 { 00:09:43.154 "name": "BaseBdev2", 00:09:43.154 "aliases": [ 00:09:43.154 
"c959248f-9e98-411b-942f-766460ae8dcb" 00:09:43.154 ], 00:09:43.154 "product_name": "Malloc disk", 00:09:43.154 "block_size": 512, 00:09:43.154 "num_blocks": 65536, 00:09:43.154 "uuid": "c959248f-9e98-411b-942f-766460ae8dcb", 00:09:43.154 "assigned_rate_limits": { 00:09:43.154 "rw_ios_per_sec": 0, 00:09:43.154 "rw_mbytes_per_sec": 0, 00:09:43.154 "r_mbytes_per_sec": 0, 00:09:43.154 "w_mbytes_per_sec": 0 00:09:43.154 }, 00:09:43.154 "claimed": true, 00:09:43.154 "claim_type": "exclusive_write", 00:09:43.154 "zoned": false, 00:09:43.154 "supported_io_types": { 00:09:43.154 "read": true, 00:09:43.154 "write": true, 00:09:43.154 "unmap": true, 00:09:43.154 "flush": true, 00:09:43.154 "reset": true, 00:09:43.154 "nvme_admin": false, 00:09:43.154 "nvme_io": false, 00:09:43.154 "nvme_io_md": false, 00:09:43.154 "write_zeroes": true, 00:09:43.154 "zcopy": true, 00:09:43.154 "get_zone_info": false, 00:09:43.154 "zone_management": false, 00:09:43.154 "zone_append": false, 00:09:43.154 "compare": false, 00:09:43.154 "compare_and_write": false, 00:09:43.154 "abort": true, 00:09:43.154 "seek_hole": false, 00:09:43.154 "seek_data": false, 00:09:43.154 "copy": true, 00:09:43.154 "nvme_iov_md": false 00:09:43.154 }, 00:09:43.154 "memory_domains": [ 00:09:43.154 { 00:09:43.154 "dma_device_id": "system", 00:09:43.154 "dma_device_type": 1 00:09:43.154 }, 00:09:43.154 { 00:09:43.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.154 "dma_device_type": 2 00:09:43.154 } 00:09:43.154 ], 00:09:43.154 "driver_specific": {} 00:09:43.154 } 00:09:43.154 ] 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.154 "name": "Existed_Raid", 00:09:43.154 "uuid": "149cc78f-445e-440b-83d3-a2ea0f9b6a4e", 00:09:43.154 
"strip_size_kb": 0, 00:09:43.154 "state": "configuring", 00:09:43.154 "raid_level": "raid1", 00:09:43.154 "superblock": true, 00:09:43.154 "num_base_bdevs": 3, 00:09:43.154 "num_base_bdevs_discovered": 2, 00:09:43.154 "num_base_bdevs_operational": 3, 00:09:43.154 "base_bdevs_list": [ 00:09:43.154 { 00:09:43.154 "name": "BaseBdev1", 00:09:43.154 "uuid": "5c815615-7aed-428f-9425-7b6868518d36", 00:09:43.154 "is_configured": true, 00:09:43.154 "data_offset": 2048, 00:09:43.154 "data_size": 63488 00:09:43.154 }, 00:09:43.154 { 00:09:43.154 "name": "BaseBdev2", 00:09:43.154 "uuid": "c959248f-9e98-411b-942f-766460ae8dcb", 00:09:43.154 "is_configured": true, 00:09:43.154 "data_offset": 2048, 00:09:43.154 "data_size": 63488 00:09:43.154 }, 00:09:43.154 { 00:09:43.154 "name": "BaseBdev3", 00:09:43.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.154 "is_configured": false, 00:09:43.154 "data_offset": 0, 00:09:43.154 "data_size": 0 00:09:43.154 } 00:09:43.154 ] 00:09:43.154 }' 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.154 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.724 [2024-11-28 18:50:13.090840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.724 [2024-11-28 18:50:13.091689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:43.724 [2024-11-28 18:50:13.091869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.724 BaseBdev3 00:09:43.724 [2024-11-28 18:50:13.093046] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:43.724 [2024-11-28 18:50:13.093690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:43.724 [2024-11-28 18:50:13.093865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:43.724 [2024-11-28 18:50:13.094464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.724 [ 00:09:43.724 { 00:09:43.724 "name": "BaseBdev3", 00:09:43.724 "aliases": [ 00:09:43.724 "080879e7-e136-4015-baff-8cbedf91f3b4" 00:09:43.724 ], 00:09:43.724 "product_name": "Malloc disk", 00:09:43.724 "block_size": 512, 00:09:43.724 "num_blocks": 65536, 00:09:43.724 "uuid": "080879e7-e136-4015-baff-8cbedf91f3b4", 00:09:43.724 "assigned_rate_limits": { 00:09:43.724 "rw_ios_per_sec": 0, 00:09:43.724 "rw_mbytes_per_sec": 0, 00:09:43.724 "r_mbytes_per_sec": 0, 00:09:43.724 "w_mbytes_per_sec": 0 00:09:43.724 }, 00:09:43.724 "claimed": true, 00:09:43.724 "claim_type": "exclusive_write", 00:09:43.724 "zoned": false, 00:09:43.724 "supported_io_types": { 00:09:43.724 "read": true, 00:09:43.724 "write": true, 00:09:43.724 "unmap": true, 00:09:43.724 "flush": true, 00:09:43.724 "reset": true, 00:09:43.724 "nvme_admin": false, 00:09:43.724 "nvme_io": false, 00:09:43.724 "nvme_io_md": false, 00:09:43.724 "write_zeroes": true, 00:09:43.724 "zcopy": true, 00:09:43.724 "get_zone_info": false, 00:09:43.724 "zone_management": false, 00:09:43.724 "zone_append": false, 00:09:43.724 "compare": false, 00:09:43.724 "compare_and_write": false, 00:09:43.724 "abort": true, 00:09:43.724 "seek_hole": false, 00:09:43.724 "seek_data": false, 00:09:43.724 "copy": true, 00:09:43.724 "nvme_iov_md": false 00:09:43.724 }, 00:09:43.724 "memory_domains": [ 00:09:43.724 { 00:09:43.724 "dma_device_id": "system", 00:09:43.724 "dma_device_type": 1 00:09:43.724 }, 00:09:43.724 { 00:09:43.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.724 "dma_device_type": 2 00:09:43.724 } 00:09:43.724 ], 00:09:43.724 "driver_specific": {} 00:09:43.724 } 00:09:43.724 ] 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.724 
18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.724 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.725 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.725 18:50:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.725 "name": "Existed_Raid", 00:09:43.725 "uuid": "149cc78f-445e-440b-83d3-a2ea0f9b6a4e", 00:09:43.725 "strip_size_kb": 0, 00:09:43.725 "state": "online", 00:09:43.725 "raid_level": "raid1", 00:09:43.725 "superblock": true, 00:09:43.725 "num_base_bdevs": 3, 00:09:43.725 "num_base_bdevs_discovered": 3, 00:09:43.725 "num_base_bdevs_operational": 3, 00:09:43.725 "base_bdevs_list": [ 00:09:43.725 { 00:09:43.725 "name": "BaseBdev1", 00:09:43.725 "uuid": "5c815615-7aed-428f-9425-7b6868518d36", 00:09:43.725 "is_configured": true, 00:09:43.725 "data_offset": 2048, 00:09:43.725 "data_size": 63488 00:09:43.725 }, 00:09:43.725 { 00:09:43.725 "name": "BaseBdev2", 00:09:43.725 "uuid": "c959248f-9e98-411b-942f-766460ae8dcb", 00:09:43.725 "is_configured": true, 00:09:43.725 "data_offset": 2048, 00:09:43.725 "data_size": 63488 00:09:43.725 }, 00:09:43.725 { 00:09:43.725 "name": "BaseBdev3", 00:09:43.725 "uuid": "080879e7-e136-4015-baff-8cbedf91f3b4", 00:09:43.725 "is_configured": true, 00:09:43.725 "data_offset": 2048, 00:09:43.725 "data_size": 63488 00:09:43.725 } 00:09:43.725 ] 00:09:43.725 }' 00:09:43.725 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.725 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:43.983 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:43.983 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.983 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.983 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.983 
18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.983 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:43.983 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.983 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.983 [2024-11-28 18:50:13.583209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.242 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.242 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.242 "name": "Existed_Raid", 00:09:44.243 "aliases": [ 00:09:44.243 "149cc78f-445e-440b-83d3-a2ea0f9b6a4e" 00:09:44.243 ], 00:09:44.243 "product_name": "Raid Volume", 00:09:44.243 "block_size": 512, 00:09:44.243 "num_blocks": 63488, 00:09:44.243 "uuid": "149cc78f-445e-440b-83d3-a2ea0f9b6a4e", 00:09:44.243 "assigned_rate_limits": { 00:09:44.243 "rw_ios_per_sec": 0, 00:09:44.243 "rw_mbytes_per_sec": 0, 00:09:44.243 "r_mbytes_per_sec": 0, 00:09:44.243 "w_mbytes_per_sec": 0 00:09:44.243 }, 00:09:44.243 "claimed": false, 00:09:44.243 "zoned": false, 00:09:44.243 "supported_io_types": { 00:09:44.243 "read": true, 00:09:44.243 "write": true, 00:09:44.243 "unmap": false, 00:09:44.243 "flush": false, 00:09:44.243 "reset": true, 00:09:44.243 "nvme_admin": false, 00:09:44.243 "nvme_io": false, 00:09:44.243 "nvme_io_md": false, 00:09:44.243 "write_zeroes": true, 00:09:44.243 "zcopy": false, 00:09:44.243 "get_zone_info": false, 00:09:44.243 "zone_management": false, 00:09:44.243 "zone_append": false, 00:09:44.243 "compare": false, 00:09:44.243 "compare_and_write": false, 00:09:44.243 
"abort": false, 00:09:44.243 "seek_hole": false, 00:09:44.243 "seek_data": false, 00:09:44.243 "copy": false, 00:09:44.243 "nvme_iov_md": false 00:09:44.243 }, 00:09:44.243 "memory_domains": [ 00:09:44.243 { 00:09:44.243 "dma_device_id": "system", 00:09:44.243 "dma_device_type": 1 00:09:44.243 }, 00:09:44.243 { 00:09:44.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.243 "dma_device_type": 2 00:09:44.243 }, 00:09:44.243 { 00:09:44.243 "dma_device_id": "system", 00:09:44.243 "dma_device_type": 1 00:09:44.243 }, 00:09:44.243 { 00:09:44.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.243 "dma_device_type": 2 00:09:44.243 }, 00:09:44.243 { 00:09:44.243 "dma_device_id": "system", 00:09:44.243 "dma_device_type": 1 00:09:44.243 }, 00:09:44.243 { 00:09:44.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.243 "dma_device_type": 2 00:09:44.243 } 00:09:44.243 ], 00:09:44.243 "driver_specific": { 00:09:44.243 "raid": { 00:09:44.243 "uuid": "149cc78f-445e-440b-83d3-a2ea0f9b6a4e", 00:09:44.243 "strip_size_kb": 0, 00:09:44.243 "state": "online", 00:09:44.243 "raid_level": "raid1", 00:09:44.243 "superblock": true, 00:09:44.243 "num_base_bdevs": 3, 00:09:44.243 "num_base_bdevs_discovered": 3, 00:09:44.243 "num_base_bdevs_operational": 3, 00:09:44.243 "base_bdevs_list": [ 00:09:44.243 { 00:09:44.243 "name": "BaseBdev1", 00:09:44.243 "uuid": "5c815615-7aed-428f-9425-7b6868518d36", 00:09:44.243 "is_configured": true, 00:09:44.243 "data_offset": 2048, 00:09:44.243 "data_size": 63488 00:09:44.243 }, 00:09:44.243 { 00:09:44.243 "name": "BaseBdev2", 00:09:44.243 "uuid": "c959248f-9e98-411b-942f-766460ae8dcb", 00:09:44.243 "is_configured": true, 00:09:44.243 "data_offset": 2048, 00:09:44.243 "data_size": 63488 00:09:44.243 }, 00:09:44.243 { 00:09:44.243 "name": "BaseBdev3", 00:09:44.243 "uuid": "080879e7-e136-4015-baff-8cbedf91f3b4", 00:09:44.243 "is_configured": true, 00:09:44.243 "data_offset": 2048, 00:09:44.243 "data_size": 63488 00:09:44.243 } 00:09:44.243 ] 
00:09:44.243 } 00:09:44.243 } 00:09:44.243 }' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:44.243 BaseBdev2 00:09:44.243 BaseBdev3' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.243 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.243 [2024-11-28 18:50:13.839034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.502 18:50:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.502 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.502 "name": "Existed_Raid", 00:09:44.502 "uuid": "149cc78f-445e-440b-83d3-a2ea0f9b6a4e", 00:09:44.502 "strip_size_kb": 0, 00:09:44.502 "state": "online", 00:09:44.502 "raid_level": "raid1", 00:09:44.502 "superblock": true, 00:09:44.503 "num_base_bdevs": 3, 00:09:44.503 "num_base_bdevs_discovered": 2, 00:09:44.503 "num_base_bdevs_operational": 2, 00:09:44.503 "base_bdevs_list": [ 00:09:44.503 { 00:09:44.503 "name": null, 00:09:44.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.503 "is_configured": false, 00:09:44.503 "data_offset": 0, 00:09:44.503 "data_size": 63488 00:09:44.503 }, 00:09:44.503 { 00:09:44.503 "name": "BaseBdev2", 00:09:44.503 "uuid": "c959248f-9e98-411b-942f-766460ae8dcb", 00:09:44.503 "is_configured": true, 00:09:44.503 "data_offset": 2048, 00:09:44.503 "data_size": 63488 00:09:44.503 }, 00:09:44.503 { 00:09:44.503 "name": "BaseBdev3", 00:09:44.503 "uuid": "080879e7-e136-4015-baff-8cbedf91f3b4", 00:09:44.503 "is_configured": true, 00:09:44.503 "data_offset": 2048, 00:09:44.503 "data_size": 63488 00:09:44.503 } 00:09:44.503 ] 00:09:44.503 }' 00:09:44.503 18:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.503 18:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:44.762 18:50:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.762 [2024-11-28 18:50:14.342720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:44.762 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 [2024-11-28 18:50:14.413979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:45.022 [2024-11-28 18:50:14.414079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.022 [2024-11-28 18:50:14.425592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.022 [2024-11-28 18:50:14.425693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.022 [2024-11-28 18:50:14.425749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 BaseBdev2 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.022 18:50:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 [ 00:09:45.022 { 00:09:45.022 "name": "BaseBdev2", 00:09:45.022 "aliases": [ 00:09:45.022 "2eb1d1a5-171e-477e-a9f6-5908132f8344" 00:09:45.022 ], 00:09:45.022 "product_name": "Malloc disk", 00:09:45.022 "block_size": 512, 00:09:45.022 "num_blocks": 65536, 00:09:45.022 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:45.022 "assigned_rate_limits": { 00:09:45.022 "rw_ios_per_sec": 0, 00:09:45.022 "rw_mbytes_per_sec": 0, 00:09:45.022 "r_mbytes_per_sec": 0, 00:09:45.022 "w_mbytes_per_sec": 0 00:09:45.022 }, 00:09:45.022 "claimed": false, 00:09:45.022 "zoned": false, 00:09:45.022 "supported_io_types": { 00:09:45.022 "read": true, 00:09:45.022 "write": true, 00:09:45.022 "unmap": true, 00:09:45.022 "flush": true, 00:09:45.022 "reset": true, 00:09:45.022 "nvme_admin": false, 00:09:45.022 "nvme_io": false, 00:09:45.022 "nvme_io_md": false, 00:09:45.022 "write_zeroes": true, 00:09:45.022 "zcopy": true, 00:09:45.022 "get_zone_info": false, 00:09:45.022 "zone_management": false, 00:09:45.022 "zone_append": false, 00:09:45.022 "compare": false, 00:09:45.022 
"compare_and_write": false, 00:09:45.022 "abort": true, 00:09:45.022 "seek_hole": false, 00:09:45.022 "seek_data": false, 00:09:45.022 "copy": true, 00:09:45.022 "nvme_iov_md": false 00:09:45.022 }, 00:09:45.022 "memory_domains": [ 00:09:45.022 { 00:09:45.022 "dma_device_id": "system", 00:09:45.022 "dma_device_type": 1 00:09:45.022 }, 00:09:45.022 { 00:09:45.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.022 "dma_device_type": 2 00:09:45.022 } 00:09:45.022 ], 00:09:45.022 "driver_specific": {} 00:09:45.022 } 00:09:45.022 ] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 BaseBdev3 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 [ 00:09:45.022 { 00:09:45.022 "name": "BaseBdev3", 00:09:45.022 "aliases": [ 00:09:45.022 "68d3e2fe-fe24-4e23-a153-e51e853c8d2e" 00:09:45.022 ], 00:09:45.022 "product_name": "Malloc disk", 00:09:45.022 "block_size": 512, 00:09:45.022 "num_blocks": 65536, 00:09:45.022 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:45.022 "assigned_rate_limits": { 00:09:45.022 "rw_ios_per_sec": 0, 00:09:45.022 "rw_mbytes_per_sec": 0, 00:09:45.022 "r_mbytes_per_sec": 0, 00:09:45.022 "w_mbytes_per_sec": 0 00:09:45.022 }, 00:09:45.022 "claimed": false, 00:09:45.022 "zoned": false, 00:09:45.022 "supported_io_types": { 00:09:45.022 "read": true, 00:09:45.022 "write": true, 00:09:45.022 "unmap": true, 00:09:45.022 "flush": true, 00:09:45.022 "reset": true, 00:09:45.022 "nvme_admin": false, 00:09:45.022 "nvme_io": false, 00:09:45.022 "nvme_io_md": false, 00:09:45.022 "write_zeroes": true, 00:09:45.022 "zcopy": true, 00:09:45.022 "get_zone_info": false, 00:09:45.022 "zone_management": false, 00:09:45.022 
"zone_append": false, 00:09:45.022 "compare": false, 00:09:45.022 "compare_and_write": false, 00:09:45.022 "abort": true, 00:09:45.022 "seek_hole": false, 00:09:45.022 "seek_data": false, 00:09:45.022 "copy": true, 00:09:45.022 "nvme_iov_md": false 00:09:45.022 }, 00:09:45.022 "memory_domains": [ 00:09:45.022 { 00:09:45.022 "dma_device_id": "system", 00:09:45.022 "dma_device_type": 1 00:09:45.022 }, 00:09:45.022 { 00:09:45.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.022 "dma_device_type": 2 00:09:45.022 } 00:09:45.022 ], 00:09:45.022 "driver_specific": {} 00:09:45.022 } 00:09:45.022 ] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.022 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 [2024-11-28 18:50:14.568956] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.022 [2024-11-28 18:50:14.569050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.022 [2024-11-28 18:50:14.569098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.022 [2024-11-28 18:50:14.570877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.022 18:50:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.023 "name": 
"Existed_Raid", 00:09:45.023 "uuid": "d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:45.023 "strip_size_kb": 0, 00:09:45.023 "state": "configuring", 00:09:45.023 "raid_level": "raid1", 00:09:45.023 "superblock": true, 00:09:45.023 "num_base_bdevs": 3, 00:09:45.023 "num_base_bdevs_discovered": 2, 00:09:45.023 "num_base_bdevs_operational": 3, 00:09:45.023 "base_bdevs_list": [ 00:09:45.023 { 00:09:45.023 "name": "BaseBdev1", 00:09:45.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.023 "is_configured": false, 00:09:45.023 "data_offset": 0, 00:09:45.023 "data_size": 0 00:09:45.023 }, 00:09:45.023 { 00:09:45.023 "name": "BaseBdev2", 00:09:45.023 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:45.023 "is_configured": true, 00:09:45.023 "data_offset": 2048, 00:09:45.023 "data_size": 63488 00:09:45.023 }, 00:09:45.023 { 00:09:45.023 "name": "BaseBdev3", 00:09:45.023 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:45.023 "is_configured": true, 00:09:45.023 "data_offset": 2048, 00:09:45.023 "data_size": 63488 00:09:45.023 } 00:09:45.023 ] 00:09:45.023 }' 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.023 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.591 18:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:45.591 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.591 18:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.591 [2024-11-28 18:50:15.005066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.591 "name": "Existed_Raid", 00:09:45.591 "uuid": "d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:45.591 "strip_size_kb": 0, 00:09:45.591 "state": "configuring", 00:09:45.591 "raid_level": "raid1", 00:09:45.591 "superblock": true, 00:09:45.591 
"num_base_bdevs": 3, 00:09:45.591 "num_base_bdevs_discovered": 1, 00:09:45.591 "num_base_bdevs_operational": 3, 00:09:45.591 "base_bdevs_list": [ 00:09:45.591 { 00:09:45.591 "name": "BaseBdev1", 00:09:45.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.591 "is_configured": false, 00:09:45.591 "data_offset": 0, 00:09:45.591 "data_size": 0 00:09:45.591 }, 00:09:45.591 { 00:09:45.591 "name": null, 00:09:45.591 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:45.591 "is_configured": false, 00:09:45.591 "data_offset": 0, 00:09:45.591 "data_size": 63488 00:09:45.591 }, 00:09:45.591 { 00:09:45.591 "name": "BaseBdev3", 00:09:45.591 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:45.591 "is_configured": true, 00:09:45.591 "data_offset": 2048, 00:09:45.591 "data_size": 63488 00:09:45.591 } 00:09:45.591 ] 00:09:45.591 }' 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.591 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.159 [2024-11-28 18:50:15.548047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.159 BaseBdev1 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.159 [ 00:09:46.159 { 00:09:46.159 "name": "BaseBdev1", 00:09:46.159 "aliases": [ 00:09:46.159 
"0c2c674e-72ab-4104-b681-62f8e06661da" 00:09:46.159 ], 00:09:46.159 "product_name": "Malloc disk", 00:09:46.159 "block_size": 512, 00:09:46.159 "num_blocks": 65536, 00:09:46.159 "uuid": "0c2c674e-72ab-4104-b681-62f8e06661da", 00:09:46.159 "assigned_rate_limits": { 00:09:46.159 "rw_ios_per_sec": 0, 00:09:46.159 "rw_mbytes_per_sec": 0, 00:09:46.159 "r_mbytes_per_sec": 0, 00:09:46.159 "w_mbytes_per_sec": 0 00:09:46.159 }, 00:09:46.159 "claimed": true, 00:09:46.159 "claim_type": "exclusive_write", 00:09:46.159 "zoned": false, 00:09:46.159 "supported_io_types": { 00:09:46.159 "read": true, 00:09:46.159 "write": true, 00:09:46.159 "unmap": true, 00:09:46.159 "flush": true, 00:09:46.159 "reset": true, 00:09:46.159 "nvme_admin": false, 00:09:46.159 "nvme_io": false, 00:09:46.159 "nvme_io_md": false, 00:09:46.159 "write_zeroes": true, 00:09:46.159 "zcopy": true, 00:09:46.159 "get_zone_info": false, 00:09:46.159 "zone_management": false, 00:09:46.159 "zone_append": false, 00:09:46.159 "compare": false, 00:09:46.159 "compare_and_write": false, 00:09:46.159 "abort": true, 00:09:46.159 "seek_hole": false, 00:09:46.159 "seek_data": false, 00:09:46.159 "copy": true, 00:09:46.159 "nvme_iov_md": false 00:09:46.159 }, 00:09:46.159 "memory_domains": [ 00:09:46.159 { 00:09:46.159 "dma_device_id": "system", 00:09:46.159 "dma_device_type": 1 00:09:46.159 }, 00:09:46.159 { 00:09:46.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.159 "dma_device_type": 2 00:09:46.159 } 00:09:46.159 ], 00:09:46.159 "driver_specific": {} 00:09:46.159 } 00:09:46.159 ] 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.159 "name": "Existed_Raid", 00:09:46.159 "uuid": "d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:46.159 "strip_size_kb": 0, 00:09:46.159 "state": "configuring", 00:09:46.159 "raid_level": "raid1", 00:09:46.159 "superblock": true, 00:09:46.159 "num_base_bdevs": 3, 00:09:46.159 "num_base_bdevs_discovered": 2, 00:09:46.159 
"num_base_bdevs_operational": 3, 00:09:46.159 "base_bdevs_list": [ 00:09:46.159 { 00:09:46.159 "name": "BaseBdev1", 00:09:46.159 "uuid": "0c2c674e-72ab-4104-b681-62f8e06661da", 00:09:46.159 "is_configured": true, 00:09:46.159 "data_offset": 2048, 00:09:46.159 "data_size": 63488 00:09:46.159 }, 00:09:46.159 { 00:09:46.159 "name": null, 00:09:46.159 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:46.159 "is_configured": false, 00:09:46.159 "data_offset": 0, 00:09:46.159 "data_size": 63488 00:09:46.159 }, 00:09:46.159 { 00:09:46.159 "name": "BaseBdev3", 00:09:46.159 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:46.159 "is_configured": true, 00:09:46.159 "data_offset": 2048, 00:09:46.159 "data_size": 63488 00:09:46.159 } 00:09:46.159 ] 00:09:46.159 }' 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.159 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.419 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.419 18:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.419 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.419 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.419 18:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.419 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:46.419 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:46.419 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.419 18:50:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.679 [2024-11-28 18:50:16.028253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.679 "name": "Existed_Raid", 00:09:46.679 "uuid": "d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:46.679 "strip_size_kb": 0, 00:09:46.679 "state": "configuring", 00:09:46.679 "raid_level": "raid1", 00:09:46.679 "superblock": true, 00:09:46.679 "num_base_bdevs": 3, 00:09:46.679 "num_base_bdevs_discovered": 1, 00:09:46.679 "num_base_bdevs_operational": 3, 00:09:46.679 "base_bdevs_list": [ 00:09:46.679 { 00:09:46.679 "name": "BaseBdev1", 00:09:46.679 "uuid": "0c2c674e-72ab-4104-b681-62f8e06661da", 00:09:46.679 "is_configured": true, 00:09:46.679 "data_offset": 2048, 00:09:46.679 "data_size": 63488 00:09:46.679 }, 00:09:46.679 { 00:09:46.679 "name": null, 00:09:46.679 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:46.679 "is_configured": false, 00:09:46.679 "data_offset": 0, 00:09:46.679 "data_size": 63488 00:09:46.679 }, 00:09:46.679 { 00:09:46.679 "name": null, 00:09:46.679 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:46.679 "is_configured": false, 00:09:46.679 "data_offset": 0, 00:09:46.679 "data_size": 63488 00:09:46.679 } 00:09:46.679 ] 00:09:46.679 }' 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.679 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.939 [2024-11-28 18:50:16.516409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.939 18:50:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.939 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.199 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.199 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.199 "name": "Existed_Raid", 00:09:47.199 "uuid": "d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:47.199 "strip_size_kb": 0, 00:09:47.199 "state": "configuring", 00:09:47.199 "raid_level": "raid1", 00:09:47.199 "superblock": true, 00:09:47.199 "num_base_bdevs": 3, 00:09:47.199 "num_base_bdevs_discovered": 2, 00:09:47.199 "num_base_bdevs_operational": 3, 00:09:47.199 "base_bdevs_list": [ 00:09:47.199 { 00:09:47.199 "name": "BaseBdev1", 00:09:47.199 "uuid": "0c2c674e-72ab-4104-b681-62f8e06661da", 00:09:47.199 "is_configured": true, 00:09:47.199 "data_offset": 2048, 00:09:47.199 "data_size": 63488 00:09:47.199 }, 00:09:47.199 { 00:09:47.199 "name": null, 00:09:47.199 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:47.199 "is_configured": false, 00:09:47.199 "data_offset": 0, 00:09:47.199 "data_size": 63488 00:09:47.199 }, 00:09:47.199 { 00:09:47.199 "name": "BaseBdev3", 00:09:47.199 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:47.199 "is_configured": true, 00:09:47.199 "data_offset": 2048, 00:09:47.199 "data_size": 63488 00:09:47.199 } 00:09:47.199 ] 00:09:47.199 }' 00:09:47.199 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.199 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.459 
18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.459 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.459 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.459 18:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:47.459 18:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.459 [2024-11-28 18:50:17.032565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.459 
18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.459 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.718 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.718 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.718 "name": "Existed_Raid", 00:09:47.718 "uuid": "d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:47.718 "strip_size_kb": 0, 00:09:47.718 "state": "configuring", 00:09:47.718 "raid_level": "raid1", 00:09:47.718 "superblock": true, 00:09:47.718 "num_base_bdevs": 3, 00:09:47.719 "num_base_bdevs_discovered": 1, 00:09:47.719 "num_base_bdevs_operational": 3, 00:09:47.719 "base_bdevs_list": [ 00:09:47.719 { 00:09:47.719 "name": null, 00:09:47.719 "uuid": "0c2c674e-72ab-4104-b681-62f8e06661da", 00:09:47.719 "is_configured": false, 00:09:47.719 "data_offset": 0, 00:09:47.719 "data_size": 63488 00:09:47.719 }, 00:09:47.719 { 00:09:47.719 "name": null, 00:09:47.719 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:47.719 "is_configured": false, 00:09:47.719 "data_offset": 0, 00:09:47.719 "data_size": 63488 00:09:47.719 }, 00:09:47.719 { 00:09:47.719 "name": 
"BaseBdev3", 00:09:47.719 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:47.719 "is_configured": true, 00:09:47.719 "data_offset": 2048, 00:09:47.719 "data_size": 63488 00:09:47.719 } 00:09:47.719 ] 00:09:47.719 }' 00:09:47.719 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.719 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.978 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.978 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:47.978 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.978 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.978 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.978 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.979 [2024-11-28 18:50:17.535097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.979 "name": "Existed_Raid", 00:09:47.979 "uuid": "d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:47.979 "strip_size_kb": 0, 00:09:47.979 "state": "configuring", 00:09:47.979 "raid_level": "raid1", 00:09:47.979 "superblock": true, 00:09:47.979 "num_base_bdevs": 3, 00:09:47.979 "num_base_bdevs_discovered": 2, 00:09:47.979 "num_base_bdevs_operational": 3, 00:09:47.979 
"base_bdevs_list": [ 00:09:47.979 { 00:09:47.979 "name": null, 00:09:47.979 "uuid": "0c2c674e-72ab-4104-b681-62f8e06661da", 00:09:47.979 "is_configured": false, 00:09:47.979 "data_offset": 0, 00:09:47.979 "data_size": 63488 00:09:47.979 }, 00:09:47.979 { 00:09:47.979 "name": "BaseBdev2", 00:09:47.979 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:47.979 "is_configured": true, 00:09:47.979 "data_offset": 2048, 00:09:47.979 "data_size": 63488 00:09:47.979 }, 00:09:47.979 { 00:09:47.979 "name": "BaseBdev3", 00:09:47.979 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:47.979 "is_configured": true, 00:09:47.979 "data_offset": 2048, 00:09:47.979 "data_size": 63488 00:09:47.979 } 00:09:47.979 ] 00:09:47.979 }' 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.979 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.549 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.549 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.549 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.549 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.549 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.549 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:48.549 18:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.549 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.549 18:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.549 18:50:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0c2c674e-72ab-4104-b681-62f8e06661da 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.549 NewBaseBdev 00:09:48.549 [2024-11-28 18:50:18.058097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:48.549 [2024-11-28 18:50:18.058281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:48.549 [2024-11-28 18:50:18.058297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.549 [2024-11-28 18:50:18.058541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:48.549 [2024-11-28 18:50:18.058672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:48.549 [2024-11-28 18:50:18.058680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:48.549 [2024-11-28 18:50:18.058775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.549 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.550 [ 00:09:48.550 { 00:09:48.550 "name": "NewBaseBdev", 00:09:48.550 "aliases": [ 00:09:48.550 "0c2c674e-72ab-4104-b681-62f8e06661da" 00:09:48.550 ], 00:09:48.550 "product_name": "Malloc disk", 00:09:48.550 "block_size": 512, 00:09:48.550 "num_blocks": 65536, 00:09:48.550 "uuid": "0c2c674e-72ab-4104-b681-62f8e06661da", 00:09:48.550 "assigned_rate_limits": { 00:09:48.550 "rw_ios_per_sec": 0, 00:09:48.550 "rw_mbytes_per_sec": 0, 00:09:48.550 "r_mbytes_per_sec": 0, 00:09:48.550 "w_mbytes_per_sec": 0 00:09:48.550 }, 00:09:48.550 "claimed": true, 00:09:48.550 "claim_type": "exclusive_write", 00:09:48.550 "zoned": false, 00:09:48.550 "supported_io_types": { 00:09:48.550 "read": true, 00:09:48.550 "write": true, 00:09:48.550 "unmap": true, 00:09:48.550 "flush": true, 00:09:48.550 "reset": true, 00:09:48.550 "nvme_admin": 
false, 00:09:48.550 "nvme_io": false, 00:09:48.550 "nvme_io_md": false, 00:09:48.550 "write_zeroes": true, 00:09:48.550 "zcopy": true, 00:09:48.550 "get_zone_info": false, 00:09:48.550 "zone_management": false, 00:09:48.550 "zone_append": false, 00:09:48.550 "compare": false, 00:09:48.550 "compare_and_write": false, 00:09:48.550 "abort": true, 00:09:48.550 "seek_hole": false, 00:09:48.550 "seek_data": false, 00:09:48.550 "copy": true, 00:09:48.550 "nvme_iov_md": false 00:09:48.550 }, 00:09:48.550 "memory_domains": [ 00:09:48.550 { 00:09:48.550 "dma_device_id": "system", 00:09:48.550 "dma_device_type": 1 00:09:48.550 }, 00:09:48.550 { 00:09:48.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.550 "dma_device_type": 2 00:09:48.550 } 00:09:48.550 ], 00:09:48.550 "driver_specific": {} 00:09:48.550 } 00:09:48.550 ] 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.550 "name": "Existed_Raid", 00:09:48.550 "uuid": "d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:48.550 "strip_size_kb": 0, 00:09:48.550 "state": "online", 00:09:48.550 "raid_level": "raid1", 00:09:48.550 "superblock": true, 00:09:48.550 "num_base_bdevs": 3, 00:09:48.550 "num_base_bdevs_discovered": 3, 00:09:48.550 "num_base_bdevs_operational": 3, 00:09:48.550 "base_bdevs_list": [ 00:09:48.550 { 00:09:48.550 "name": "NewBaseBdev", 00:09:48.550 "uuid": "0c2c674e-72ab-4104-b681-62f8e06661da", 00:09:48.550 "is_configured": true, 00:09:48.550 "data_offset": 2048, 00:09:48.550 "data_size": 63488 00:09:48.550 }, 00:09:48.550 { 00:09:48.550 "name": "BaseBdev2", 00:09:48.550 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:48.550 "is_configured": true, 00:09:48.550 "data_offset": 2048, 00:09:48.550 "data_size": 63488 00:09:48.550 }, 00:09:48.550 { 00:09:48.550 "name": "BaseBdev3", 00:09:48.550 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:48.550 "is_configured": true, 00:09:48.550 "data_offset": 2048, 00:09:48.550 "data_size": 63488 00:09:48.550 } 
00:09:48.550 ] 00:09:48.550 }' 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.550 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.119 [2024-11-28 18:50:18.562580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.119 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.119 "name": "Existed_Raid", 00:09:49.119 "aliases": [ 00:09:49.119 "d61271f7-18c9-4766-8bf6-a14becc7f5dc" 00:09:49.119 ], 00:09:49.119 "product_name": "Raid Volume", 00:09:49.119 "block_size": 512, 00:09:49.119 "num_blocks": 63488, 00:09:49.119 "uuid": 
"d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:49.119 "assigned_rate_limits": { 00:09:49.119 "rw_ios_per_sec": 0, 00:09:49.119 "rw_mbytes_per_sec": 0, 00:09:49.119 "r_mbytes_per_sec": 0, 00:09:49.119 "w_mbytes_per_sec": 0 00:09:49.119 }, 00:09:49.119 "claimed": false, 00:09:49.119 "zoned": false, 00:09:49.119 "supported_io_types": { 00:09:49.119 "read": true, 00:09:49.119 "write": true, 00:09:49.119 "unmap": false, 00:09:49.119 "flush": false, 00:09:49.119 "reset": true, 00:09:49.119 "nvme_admin": false, 00:09:49.119 "nvme_io": false, 00:09:49.119 "nvme_io_md": false, 00:09:49.119 "write_zeroes": true, 00:09:49.119 "zcopy": false, 00:09:49.119 "get_zone_info": false, 00:09:49.119 "zone_management": false, 00:09:49.119 "zone_append": false, 00:09:49.119 "compare": false, 00:09:49.119 "compare_and_write": false, 00:09:49.119 "abort": false, 00:09:49.119 "seek_hole": false, 00:09:49.119 "seek_data": false, 00:09:49.119 "copy": false, 00:09:49.119 "nvme_iov_md": false 00:09:49.120 }, 00:09:49.120 "memory_domains": [ 00:09:49.120 { 00:09:49.120 "dma_device_id": "system", 00:09:49.120 "dma_device_type": 1 00:09:49.120 }, 00:09:49.120 { 00:09:49.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.120 "dma_device_type": 2 00:09:49.120 }, 00:09:49.120 { 00:09:49.120 "dma_device_id": "system", 00:09:49.120 "dma_device_type": 1 00:09:49.120 }, 00:09:49.120 { 00:09:49.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.120 "dma_device_type": 2 00:09:49.120 }, 00:09:49.120 { 00:09:49.120 "dma_device_id": "system", 00:09:49.120 "dma_device_type": 1 00:09:49.120 }, 00:09:49.120 { 00:09:49.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.120 "dma_device_type": 2 00:09:49.120 } 00:09:49.120 ], 00:09:49.120 "driver_specific": { 00:09:49.120 "raid": { 00:09:49.120 "uuid": "d61271f7-18c9-4766-8bf6-a14becc7f5dc", 00:09:49.120 "strip_size_kb": 0, 00:09:49.120 "state": "online", 00:09:49.120 "raid_level": "raid1", 00:09:49.120 "superblock": true, 00:09:49.120 "num_base_bdevs": 
3, 00:09:49.120 "num_base_bdevs_discovered": 3, 00:09:49.120 "num_base_bdevs_operational": 3, 00:09:49.120 "base_bdevs_list": [ 00:09:49.120 { 00:09:49.120 "name": "NewBaseBdev", 00:09:49.120 "uuid": "0c2c674e-72ab-4104-b681-62f8e06661da", 00:09:49.120 "is_configured": true, 00:09:49.120 "data_offset": 2048, 00:09:49.120 "data_size": 63488 00:09:49.120 }, 00:09:49.120 { 00:09:49.120 "name": "BaseBdev2", 00:09:49.120 "uuid": "2eb1d1a5-171e-477e-a9f6-5908132f8344", 00:09:49.120 "is_configured": true, 00:09:49.120 "data_offset": 2048, 00:09:49.120 "data_size": 63488 00:09:49.120 }, 00:09:49.120 { 00:09:49.120 "name": "BaseBdev3", 00:09:49.120 "uuid": "68d3e2fe-fe24-4e23-a153-e51e853c8d2e", 00:09:49.120 "is_configured": true, 00:09:49.120 "data_offset": 2048, 00:09:49.120 "data_size": 63488 00:09:49.120 } 00:09:49.120 ] 00:09:49.120 } 00:09:49.120 } 00:09:49.120 }' 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:49.120 BaseBdev2 00:09:49.120 BaseBdev3' 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.120 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.379 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.379 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.379 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.379 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.380 18:50:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.380 [2024-11-28 18:50:18.830335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.380 [2024-11-28 18:50:18.830360] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.380 [2024-11-28 18:50:18.830442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.380 [2024-11-28 18:50:18.830681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.380 [2024-11-28 18:50:18.830693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80582 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80582 ']' 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80582 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 
00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80582 00:09:49.380 killing process with pid 80582 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80582' 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80582 00:09:49.380 [2024-11-28 18:50:18.873087] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.380 18:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80582 00:09:49.380 [2024-11-28 18:50:18.903908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.639 18:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:49.639 ************************************ 00:09:49.639 END TEST raid_state_function_test_sb 00:09:49.639 ************************************ 00:09:49.639 00:09:49.639 real 0m8.756s 00:09:49.639 user 0m14.986s 00:09:49.639 sys 0m1.696s 00:09:49.639 18:50:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.639 18:50:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.639 18:50:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:49.639 18:50:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:49.639 18:50:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.639 18:50:19 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.639 ************************************ 00:09:49.639 START TEST raid_superblock_test 00:09:49.639 ************************************ 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81180 00:09:49.639 18:50:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81180 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81180 ']' 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.639 18:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.898 [2024-11-28 18:50:19.285633] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:49.898 [2024-11-28 18:50:19.285820] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81180 ] 00:09:49.898 [2024-11-28 18:50:19.420386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:49.898 [2024-11-28 18:50:19.456066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.898 [2024-11-28 18:50:19.480628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.158 [2024-11-28 18:50:19.522824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.158 [2024-11-28 18:50:19.522859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.727 malloc1 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.727 [2024-11-28 18:50:20.126687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.727 [2024-11-28 18:50:20.126801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.727 [2024-11-28 18:50:20.126842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:50.727 [2024-11-28 18:50:20.126895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.727 [2024-11-28 18:50:20.128974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.727 [2024-11-28 18:50:20.129045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.727 pt1 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.727 malloc2 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.727 [2024-11-28 18:50:20.159023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.727 [2024-11-28 18:50:20.159071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.727 [2024-11-28 18:50:20.159089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:50.727 [2024-11-28 18:50:20.159097] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.727 [2024-11-28 18:50:20.161102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.727 [2024-11-28 18:50:20.161148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.727 pt2 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.727 malloc3 00:09:50.727 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.728 [2024-11-28 18:50:20.187409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:50.728 [2024-11-28 18:50:20.187507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.728 [2024-11-28 18:50:20.187546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:50.728 [2024-11-28 18:50:20.187575] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:50.728 [2024-11-28 18:50:20.189574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.728 [2024-11-28 18:50:20.189640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:50.728 pt3 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.728 [2024-11-28 18:50:20.199458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:50.728 [2024-11-28 18:50:20.201237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.728 [2024-11-28 18:50:20.201349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:50.728 [2024-11-28 18:50:20.201533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:50.728 [2024-11-28 18:50:20.201581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.728 [2024-11-28 18:50:20.201859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:50.728 [2024-11-28 18:50:20.202040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:50.728 [2024-11-28 18:50:20.202088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:50.728 [2024-11-28 18:50:20.202241] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.728 "name": "raid_bdev1", 00:09:50.728 "uuid": 
"9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:50.728 "strip_size_kb": 0, 00:09:50.728 "state": "online", 00:09:50.728 "raid_level": "raid1", 00:09:50.728 "superblock": true, 00:09:50.728 "num_base_bdevs": 3, 00:09:50.728 "num_base_bdevs_discovered": 3, 00:09:50.728 "num_base_bdevs_operational": 3, 00:09:50.728 "base_bdevs_list": [ 00:09:50.728 { 00:09:50.728 "name": "pt1", 00:09:50.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.728 "is_configured": true, 00:09:50.728 "data_offset": 2048, 00:09:50.728 "data_size": 63488 00:09:50.728 }, 00:09:50.728 { 00:09:50.728 "name": "pt2", 00:09:50.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.728 "is_configured": true, 00:09:50.728 "data_offset": 2048, 00:09:50.728 "data_size": 63488 00:09:50.728 }, 00:09:50.728 { 00:09:50.728 "name": "pt3", 00:09:50.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.728 "is_configured": true, 00:09:50.728 "data_offset": 2048, 00:09:50.728 "data_size": 63488 00:09:50.728 } 00:09:50.728 ] 00:09:50.728 }' 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.728 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.298 18:50:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.298 [2024-11-28 18:50:20.643829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.298 "name": "raid_bdev1", 00:09:51.298 "aliases": [ 00:09:51.298 "9a30ba75-9f2d-4fbf-96ae-ddce20e81716" 00:09:51.298 ], 00:09:51.298 "product_name": "Raid Volume", 00:09:51.298 "block_size": 512, 00:09:51.298 "num_blocks": 63488, 00:09:51.298 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:51.298 "assigned_rate_limits": { 00:09:51.298 "rw_ios_per_sec": 0, 00:09:51.298 "rw_mbytes_per_sec": 0, 00:09:51.298 "r_mbytes_per_sec": 0, 00:09:51.298 "w_mbytes_per_sec": 0 00:09:51.298 }, 00:09:51.298 "claimed": false, 00:09:51.298 "zoned": false, 00:09:51.298 "supported_io_types": { 00:09:51.298 "read": true, 00:09:51.298 "write": true, 00:09:51.298 "unmap": false, 00:09:51.298 "flush": false, 00:09:51.298 "reset": true, 00:09:51.298 "nvme_admin": false, 00:09:51.298 "nvme_io": false, 00:09:51.298 "nvme_io_md": false, 00:09:51.298 "write_zeroes": true, 00:09:51.298 "zcopy": false, 00:09:51.298 "get_zone_info": false, 00:09:51.298 "zone_management": false, 00:09:51.298 "zone_append": false, 00:09:51.298 "compare": false, 00:09:51.298 "compare_and_write": false, 00:09:51.298 "abort": false, 00:09:51.298 "seek_hole": false, 00:09:51.298 "seek_data": false, 00:09:51.298 "copy": false, 00:09:51.298 "nvme_iov_md": false 00:09:51.298 }, 00:09:51.298 "memory_domains": [ 00:09:51.298 { 00:09:51.298 "dma_device_id": "system", 00:09:51.298 
"dma_device_type": 1 00:09:51.298 }, 00:09:51.298 { 00:09:51.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.298 "dma_device_type": 2 00:09:51.298 }, 00:09:51.298 { 00:09:51.298 "dma_device_id": "system", 00:09:51.298 "dma_device_type": 1 00:09:51.298 }, 00:09:51.298 { 00:09:51.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.298 "dma_device_type": 2 00:09:51.298 }, 00:09:51.298 { 00:09:51.298 "dma_device_id": "system", 00:09:51.298 "dma_device_type": 1 00:09:51.298 }, 00:09:51.298 { 00:09:51.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.298 "dma_device_type": 2 00:09:51.298 } 00:09:51.298 ], 00:09:51.298 "driver_specific": { 00:09:51.298 "raid": { 00:09:51.298 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:51.298 "strip_size_kb": 0, 00:09:51.298 "state": "online", 00:09:51.298 "raid_level": "raid1", 00:09:51.298 "superblock": true, 00:09:51.298 "num_base_bdevs": 3, 00:09:51.298 "num_base_bdevs_discovered": 3, 00:09:51.298 "num_base_bdevs_operational": 3, 00:09:51.298 "base_bdevs_list": [ 00:09:51.298 { 00:09:51.298 "name": "pt1", 00:09:51.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.298 "is_configured": true, 00:09:51.298 "data_offset": 2048, 00:09:51.298 "data_size": 63488 00:09:51.298 }, 00:09:51.298 { 00:09:51.298 "name": "pt2", 00:09:51.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.298 "is_configured": true, 00:09:51.298 "data_offset": 2048, 00:09:51.298 "data_size": 63488 00:09:51.298 }, 00:09:51.298 { 00:09:51.298 "name": "pt3", 00:09:51.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.298 "is_configured": true, 00:09:51.298 "data_offset": 2048, 00:09:51.298 "data_size": 63488 00:09:51.298 } 00:09:51.298 ] 00:09:51.298 } 00:09:51.298 } 00:09:51.298 }' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:51.298 pt2 00:09:51.298 pt3' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:51.298 [2024-11-28 18:50:20.883860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.298 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9a30ba75-9f2d-4fbf-96ae-ddce20e81716 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9a30ba75-9f2d-4fbf-96ae-ddce20e81716 ']' 00:09:51.559 18:50:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 [2024-11-28 18:50:20.931603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.559 [2024-11-28 18:50:20.931628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.559 [2024-11-28 18:50:20.931705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.559 [2024-11-28 18:50:20.931778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.559 [2024-11-28 18:50:20.931788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.559 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.559 18:50:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 [2024-11-28 18:50:21.079673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:51.559 [2024-11-28 18:50:21.081491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:51.559 [2024-11-28 18:50:21.081545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:51.559 [2024-11-28 18:50:21.081591] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:51.559 [2024-11-28 18:50:21.081634] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:51.559 [2024-11-28 18:50:21.081650] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:51.559 [2024-11-28 18:50:21.081664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.559 [2024-11-28 18:50:21.081672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:09:51.559 request: 00:09:51.559 { 00:09:51.559 "name": "raid_bdev1", 00:09:51.559 "raid_level": "raid1", 00:09:51.559 "base_bdevs": [ 00:09:51.559 "malloc1", 00:09:51.559 "malloc2", 00:09:51.559 "malloc3" 00:09:51.559 ], 00:09:51.559 "superblock": false, 00:09:51.559 "method": "bdev_raid_create", 00:09:51.559 "req_id": 1 00:09:51.559 } 00:09:51.559 Got JSON-RPC error response 00:09:51.559 response: 00:09:51.559 { 00:09:51.559 "code": -17, 00:09:51.559 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:51.559 } 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.559 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 [2024-11-28 18:50:21.139657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:51.559 [2024-11-28 18:50:21.139742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.559 [2024-11-28 18:50:21.139775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:51.559 [2024-11-28 18:50:21.139812] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.560 [2024-11-28 18:50:21.141840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.560 [2024-11-28 18:50:21.141907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:51.560 [2024-11-28 18:50:21.142015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:51.560 [2024-11-28 18:50:21.142073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.560 pt1 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.560 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.819 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.819 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.819 "name": "raid_bdev1", 00:09:51.819 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:51.819 "strip_size_kb": 0, 00:09:51.819 "state": "configuring", 00:09:51.819 "raid_level": "raid1", 00:09:51.819 "superblock": true, 00:09:51.819 "num_base_bdevs": 3, 00:09:51.819 "num_base_bdevs_discovered": 1, 00:09:51.819 "num_base_bdevs_operational": 3, 00:09:51.819 "base_bdevs_list": [ 00:09:51.819 { 00:09:51.819 "name": 
"pt1", 00:09:51.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.819 "is_configured": true, 00:09:51.819 "data_offset": 2048, 00:09:51.819 "data_size": 63488 00:09:51.819 }, 00:09:51.819 { 00:09:51.819 "name": null, 00:09:51.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.819 "is_configured": false, 00:09:51.819 "data_offset": 2048, 00:09:51.819 "data_size": 63488 00:09:51.819 }, 00:09:51.819 { 00:09:51.819 "name": null, 00:09:51.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.819 "is_configured": false, 00:09:51.819 "data_offset": 2048, 00:09:51.819 "data_size": 63488 00:09:51.819 } 00:09:51.819 ] 00:09:51.819 }' 00:09:51.819 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.819 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.079 [2024-11-28 18:50:21.587806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.079 [2024-11-28 18:50:21.587864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.079 [2024-11-28 18:50:21.587888] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:52.079 [2024-11-28 18:50:21.587897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.079 [2024-11-28 18:50:21.588267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.079 [2024-11-28 18:50:21.588284] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.079 [2024-11-28 18:50:21.588350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:52.079 [2024-11-28 18:50:21.588369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.079 pt2 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.079 [2024-11-28 18:50:21.595840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.079 18:50:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.079 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.079 "name": "raid_bdev1", 00:09:52.079 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:52.079 "strip_size_kb": 0, 00:09:52.079 "state": "configuring", 00:09:52.079 "raid_level": "raid1", 00:09:52.079 "superblock": true, 00:09:52.079 "num_base_bdevs": 3, 00:09:52.079 "num_base_bdevs_discovered": 1, 00:09:52.079 "num_base_bdevs_operational": 3, 00:09:52.079 "base_bdevs_list": [ 00:09:52.079 { 00:09:52.079 "name": "pt1", 00:09:52.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.079 "is_configured": true, 00:09:52.079 "data_offset": 2048, 00:09:52.079 "data_size": 63488 00:09:52.079 }, 00:09:52.079 { 00:09:52.079 "name": null, 00:09:52.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.079 "is_configured": false, 00:09:52.079 "data_offset": 0, 00:09:52.079 "data_size": 63488 00:09:52.079 }, 00:09:52.079 { 00:09:52.079 "name": null, 00:09:52.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.079 "is_configured": false, 00:09:52.080 "data_offset": 2048, 00:09:52.080 "data_size": 63488 00:09:52.080 } 00:09:52.080 ] 00:09:52.080 }' 00:09:52.080 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.080 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.650 [2024-11-28 18:50:22.059949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.650 [2024-11-28 18:50:22.060025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.650 [2024-11-28 18:50:22.060046] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:52.650 [2024-11-28 18:50:22.060068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.650 [2024-11-28 18:50:22.060490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.650 [2024-11-28 18:50:22.060510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.650 [2024-11-28 18:50:22.060578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:52.650 [2024-11-28 18:50:22.060599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.650 pt2 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 
-u 00000000-0000-0000-0000-000000000003 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.650 [2024-11-28 18:50:22.071915] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:52.650 [2024-11-28 18:50:22.071964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.650 [2024-11-28 18:50:22.071977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:52.650 [2024-11-28 18:50:22.071987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.650 [2024-11-28 18:50:22.072287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.650 [2024-11-28 18:50:22.072305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:52.650 [2024-11-28 18:50:22.072364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:52.650 [2024-11-28 18:50:22.072382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:52.650 [2024-11-28 18:50:22.072477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:52.650 [2024-11-28 18:50:22.072488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.650 [2024-11-28 18:50:22.072755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:52.650 [2024-11-28 18:50:22.072872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:52.650 [2024-11-28 18:50:22.072881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:52.650 [2024-11-28 18:50:22.072976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:52.650 pt3 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.650 18:50:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.650 "name": "raid_bdev1", 00:09:52.650 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:52.650 "strip_size_kb": 0, 00:09:52.650 "state": "online", 00:09:52.650 "raid_level": "raid1", 00:09:52.650 "superblock": true, 00:09:52.650 "num_base_bdevs": 3, 00:09:52.650 "num_base_bdevs_discovered": 3, 00:09:52.650 "num_base_bdevs_operational": 3, 00:09:52.650 "base_bdevs_list": [ 00:09:52.650 { 00:09:52.650 "name": "pt1", 00:09:52.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.650 "is_configured": true, 00:09:52.650 "data_offset": 2048, 00:09:52.650 "data_size": 63488 00:09:52.650 }, 00:09:52.650 { 00:09:52.650 "name": "pt2", 00:09:52.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.650 "is_configured": true, 00:09:52.650 "data_offset": 2048, 00:09:52.650 "data_size": 63488 00:09:52.650 }, 00:09:52.650 { 00:09:52.650 "name": "pt3", 00:09:52.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.650 "is_configured": true, 00:09:52.650 "data_offset": 2048, 00:09:52.650 "data_size": 63488 00:09:52.650 } 00:09:52.650 ] 00:09:52.650 }' 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.650 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.234 [2024-11-28 18:50:22.544339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.234 "name": "raid_bdev1", 00:09:53.234 "aliases": [ 00:09:53.234 "9a30ba75-9f2d-4fbf-96ae-ddce20e81716" 00:09:53.234 ], 00:09:53.234 "product_name": "Raid Volume", 00:09:53.234 "block_size": 512, 00:09:53.234 "num_blocks": 63488, 00:09:53.234 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:53.234 "assigned_rate_limits": { 00:09:53.234 "rw_ios_per_sec": 0, 00:09:53.234 "rw_mbytes_per_sec": 0, 00:09:53.234 "r_mbytes_per_sec": 0, 00:09:53.234 "w_mbytes_per_sec": 0 00:09:53.234 }, 00:09:53.234 "claimed": false, 00:09:53.234 "zoned": false, 00:09:53.234 "supported_io_types": { 00:09:53.234 "read": true, 00:09:53.234 "write": true, 00:09:53.234 "unmap": false, 00:09:53.234 "flush": false, 00:09:53.234 "reset": true, 00:09:53.234 "nvme_admin": false, 00:09:53.234 "nvme_io": false, 00:09:53.234 "nvme_io_md": false, 00:09:53.234 "write_zeroes": true, 00:09:53.234 "zcopy": false, 00:09:53.234 "get_zone_info": false, 00:09:53.234 "zone_management": false, 00:09:53.234 "zone_append": false, 00:09:53.234 "compare": false, 00:09:53.234 "compare_and_write": false, 00:09:53.234 "abort": false, 00:09:53.234 "seek_hole": false, 00:09:53.234 "seek_data": false, 00:09:53.234 "copy": false, 00:09:53.234 
"nvme_iov_md": false 00:09:53.234 }, 00:09:53.234 "memory_domains": [ 00:09:53.234 { 00:09:53.234 "dma_device_id": "system", 00:09:53.234 "dma_device_type": 1 00:09:53.234 }, 00:09:53.234 { 00:09:53.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.234 "dma_device_type": 2 00:09:53.234 }, 00:09:53.234 { 00:09:53.234 "dma_device_id": "system", 00:09:53.234 "dma_device_type": 1 00:09:53.234 }, 00:09:53.234 { 00:09:53.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.234 "dma_device_type": 2 00:09:53.234 }, 00:09:53.234 { 00:09:53.234 "dma_device_id": "system", 00:09:53.234 "dma_device_type": 1 00:09:53.234 }, 00:09:53.234 { 00:09:53.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.234 "dma_device_type": 2 00:09:53.234 } 00:09:53.234 ], 00:09:53.234 "driver_specific": { 00:09:53.234 "raid": { 00:09:53.234 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:53.234 "strip_size_kb": 0, 00:09:53.234 "state": "online", 00:09:53.234 "raid_level": "raid1", 00:09:53.234 "superblock": true, 00:09:53.234 "num_base_bdevs": 3, 00:09:53.234 "num_base_bdevs_discovered": 3, 00:09:53.234 "num_base_bdevs_operational": 3, 00:09:53.234 "base_bdevs_list": [ 00:09:53.234 { 00:09:53.234 "name": "pt1", 00:09:53.234 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.234 "is_configured": true, 00:09:53.234 "data_offset": 2048, 00:09:53.234 "data_size": 63488 00:09:53.234 }, 00:09:53.234 { 00:09:53.234 "name": "pt2", 00:09:53.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.234 "is_configured": true, 00:09:53.234 "data_offset": 2048, 00:09:53.234 "data_size": 63488 00:09:53.234 }, 00:09:53.234 { 00:09:53.234 "name": "pt3", 00:09:53.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.234 "is_configured": true, 00:09:53.234 "data_offset": 2048, 00:09:53.234 "data_size": 63488 00:09:53.234 } 00:09:53.234 ] 00:09:53.234 } 00:09:53.234 } 00:09:53.234 }' 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.234 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:53.234 pt2 00:09:53.234 pt3' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.235 18:50:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:53.235 [2024-11-28 18:50:22.796363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9a30ba75-9f2d-4fbf-96ae-ddce20e81716 '!=' 
9a30ba75-9f2d-4fbf-96ae-ddce20e81716 ']' 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.235 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.495 [2024-11-28 18:50:22.840161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.495 18:50:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.495 "name": "raid_bdev1", 00:09:53.495 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:53.495 "strip_size_kb": 0, 00:09:53.495 "state": "online", 00:09:53.495 "raid_level": "raid1", 00:09:53.495 "superblock": true, 00:09:53.495 "num_base_bdevs": 3, 00:09:53.495 "num_base_bdevs_discovered": 2, 00:09:53.495 "num_base_bdevs_operational": 2, 00:09:53.495 "base_bdevs_list": [ 00:09:53.495 { 00:09:53.495 "name": null, 00:09:53.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.495 "is_configured": false, 00:09:53.495 "data_offset": 0, 00:09:53.495 "data_size": 63488 00:09:53.495 }, 00:09:53.495 { 00:09:53.495 "name": "pt2", 00:09:53.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.495 "is_configured": true, 00:09:53.495 "data_offset": 2048, 00:09:53.495 "data_size": 63488 00:09:53.495 }, 00:09:53.495 { 00:09:53.495 "name": "pt3", 00:09:53.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.495 "is_configured": true, 00:09:53.495 "data_offset": 2048, 00:09:53.495 "data_size": 63488 00:09:53.495 } 00:09:53.495 ] 00:09:53.495 }' 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.495 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.754 [2024-11-28 18:50:23.252238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.754 [2024-11-28 18:50:23.252307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.754 [2024-11-28 18:50:23.252393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.754 [2024-11-28 18:50:23.252479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.754 [2024-11-28 18:50:23.252526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 
-- # rpc_cmd bdev_passthru_delete pt2 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.754 [2024-11-28 18:50:23.316254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:53.754 [2024-11-28 18:50:23.316304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.754 
[2024-11-28 18:50:23.316320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:53.754 [2024-11-28 18:50:23.316330] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.754 [2024-11-28 18:50:23.318411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.754 [2024-11-28 18:50:23.318532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:53.754 [2024-11-28 18:50:23.318606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:53.754 [2024-11-28 18:50:23.318652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:53.754 pt2 00:09:53.754 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.755 18:50:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.755 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.014 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.014 "name": "raid_bdev1", 00:09:54.014 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:54.014 "strip_size_kb": 0, 00:09:54.014 "state": "configuring", 00:09:54.014 "raid_level": "raid1", 00:09:54.014 "superblock": true, 00:09:54.014 "num_base_bdevs": 3, 00:09:54.014 "num_base_bdevs_discovered": 1, 00:09:54.014 "num_base_bdevs_operational": 2, 00:09:54.014 "base_bdevs_list": [ 00:09:54.014 { 00:09:54.014 "name": null, 00:09:54.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.014 "is_configured": false, 00:09:54.014 "data_offset": 2048, 00:09:54.014 "data_size": 63488 00:09:54.014 }, 00:09:54.014 { 00:09:54.014 "name": "pt2", 00:09:54.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.014 "is_configured": true, 00:09:54.014 "data_offset": 2048, 00:09:54.014 "data_size": 63488 00:09:54.014 }, 00:09:54.014 { 00:09:54.014 "name": null, 00:09:54.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.014 "is_configured": false, 00:09:54.014 "data_offset": 2048, 00:09:54.014 "data_size": 63488 00:09:54.014 } 00:09:54.014 ] 00:09:54.014 }' 00:09:54.014 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.014 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.273 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( 
i++ )) 00:09:54.273 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:54.273 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:54.273 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.273 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.273 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.273 [2024-11-28 18:50:23.784416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.273 [2024-11-28 18:50:23.784532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.273 [2024-11-28 18:50:23.784582] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:54.273 [2024-11-28 18:50:23.784616] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.273 [2024-11-28 18:50:23.785025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.273 [2024-11-28 18:50:23.785088] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.273 [2024-11-28 18:50:23.785184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:54.273 [2024-11-28 18:50:23.785233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.273 [2024-11-28 18:50:23.785342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:54.273 [2024-11-28 18:50:23.785385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.274 [2024-11-28 18:50:23.785634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:54.274 [2024-11-28 18:50:23.785795] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:54.274 [2024-11-28 18:50:23.785835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:54.274 [2024-11-28 18:50:23.785980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.274 pt3 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.274 "name": "raid_bdev1", 00:09:54.274 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:54.274 "strip_size_kb": 0, 00:09:54.274 "state": "online", 00:09:54.274 "raid_level": "raid1", 00:09:54.274 "superblock": true, 00:09:54.274 "num_base_bdevs": 3, 00:09:54.274 "num_base_bdevs_discovered": 2, 00:09:54.274 "num_base_bdevs_operational": 2, 00:09:54.274 "base_bdevs_list": [ 00:09:54.274 { 00:09:54.274 "name": null, 00:09:54.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.274 "is_configured": false, 00:09:54.274 "data_offset": 2048, 00:09:54.274 "data_size": 63488 00:09:54.274 }, 00:09:54.274 { 00:09:54.274 "name": "pt2", 00:09:54.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.274 "is_configured": true, 00:09:54.274 "data_offset": 2048, 00:09:54.274 "data_size": 63488 00:09:54.274 }, 00:09:54.274 { 00:09:54.274 "name": "pt3", 00:09:54.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.274 "is_configured": true, 00:09:54.274 "data_offset": 2048, 00:09:54.274 "data_size": 63488 00:09:54.274 } 00:09:54.274 ] 00:09:54.274 }' 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.274 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.843 [2024-11-28 18:50:24.232513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.843 [2024-11-28 18:50:24.232594] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.843 [2024-11-28 18:50:24.232673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.843 [2024-11-28 18:50:24.232729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.843 [2024-11-28 18:50:24.232737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.843 [2024-11-28 18:50:24.308538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.843 [2024-11-28 18:50:24.308584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.843 [2024-11-28 18:50:24.308612] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:54.843 [2024-11-28 18:50:24.308621] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.843 [2024-11-28 18:50:24.310631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.843 [2024-11-28 18:50:24.310707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.843 [2024-11-28 18:50:24.310775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:54.843 [2024-11-28 18:50:24.310806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.843 [2024-11-28 18:50:24.310912] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:54.843 [2024-11-28 18:50:24.310923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.843 [2024-11-28 18:50:24.310938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:09:54.843 [2024-11-28 18:50:24.310977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.843 pt1 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.843 18:50:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.843 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.843 "name": "raid_bdev1", 00:09:54.843 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:54.843 "strip_size_kb": 0, 00:09:54.843 "state": 
"configuring", 00:09:54.843 "raid_level": "raid1", 00:09:54.843 "superblock": true, 00:09:54.843 "num_base_bdevs": 3, 00:09:54.843 "num_base_bdevs_discovered": 1, 00:09:54.843 "num_base_bdevs_operational": 2, 00:09:54.843 "base_bdevs_list": [ 00:09:54.843 { 00:09:54.843 "name": null, 00:09:54.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.843 "is_configured": false, 00:09:54.843 "data_offset": 2048, 00:09:54.843 "data_size": 63488 00:09:54.843 }, 00:09:54.843 { 00:09:54.843 "name": "pt2", 00:09:54.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.843 "is_configured": true, 00:09:54.843 "data_offset": 2048, 00:09:54.843 "data_size": 63488 00:09:54.843 }, 00:09:54.843 { 00:09:54.843 "name": null, 00:09:54.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.843 "is_configured": false, 00:09:54.844 "data_offset": 2048, 00:09:54.844 "data_size": 63488 00:09:54.844 } 00:09:54.844 ] 00:09:54.844 }' 00:09:54.844 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.844 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:55.103 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:55.103 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.362 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:55.362 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:55.362 18:50:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.362 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.362 [2024-11-28 18:50:24.744679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:55.362 [2024-11-28 18:50:24.744785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.362 [2024-11-28 18:50:24.744834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:55.362 [2024-11-28 18:50:24.744862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.362 [2024-11-28 18:50:24.745294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.362 [2024-11-28 18:50:24.745350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:55.362 [2024-11-28 18:50:24.745458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:55.362 [2024-11-28 18:50:24.745506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:55.362 [2024-11-28 18:50:24.745624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:55.362 [2024-11-28 18:50:24.745659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:55.362 [2024-11-28 18:50:24.745907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:09:55.362 [2024-11-28 18:50:24.746060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:55.362 [2024-11-28 18:50:24.746103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:55.362 [2024-11-28 18:50:24.746242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.362 pt3 00:09:55.362 18:50:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.363 "name": "raid_bdev1", 00:09:55.363 "uuid": "9a30ba75-9f2d-4fbf-96ae-ddce20e81716", 00:09:55.363 "strip_size_kb": 0, 00:09:55.363 "state": "online", 00:09:55.363 "raid_level": "raid1", 
00:09:55.363 "superblock": true, 00:09:55.363 "num_base_bdevs": 3, 00:09:55.363 "num_base_bdevs_discovered": 2, 00:09:55.363 "num_base_bdevs_operational": 2, 00:09:55.363 "base_bdevs_list": [ 00:09:55.363 { 00:09:55.363 "name": null, 00:09:55.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.363 "is_configured": false, 00:09:55.363 "data_offset": 2048, 00:09:55.363 "data_size": 63488 00:09:55.363 }, 00:09:55.363 { 00:09:55.363 "name": "pt2", 00:09:55.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.363 "is_configured": true, 00:09:55.363 "data_offset": 2048, 00:09:55.363 "data_size": 63488 00:09:55.363 }, 00:09:55.363 { 00:09:55.363 "name": "pt3", 00:09:55.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.363 "is_configured": true, 00:09:55.363 "data_offset": 2048, 00:09:55.363 "data_size": 63488 00:09:55.363 } 00:09:55.363 ] 00:09:55.363 }' 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.363 18:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.633 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:55.633 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.633 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:55.633 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:55.895 18:50:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.895 [2024-11-28 18:50:25.281052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9a30ba75-9f2d-4fbf-96ae-ddce20e81716 '!=' 9a30ba75-9f2d-4fbf-96ae-ddce20e81716 ']' 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81180 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81180 ']' 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81180 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81180 00:09:55.895 killing process with pid 81180 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81180' 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 81180 00:09:55.895 [2024-11-28 18:50:25.365676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.895 [2024-11-28 18:50:25.365748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.895 [2024-11-28 18:50:25.365804] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.895 [2024-11-28 18:50:25.365816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:55.895 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81180 00:09:55.895 [2024-11-28 18:50:25.399055] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.155 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:56.155 00:09:56.155 real 0m6.422s 00:09:56.155 user 0m10.844s 00:09:56.155 sys 0m1.270s 00:09:56.155 ************************************ 00:09:56.155 END TEST raid_superblock_test 00:09:56.155 ************************************ 00:09:56.155 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.155 18:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.155 18:50:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:56.155 18:50:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:56.155 18:50:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.155 18:50:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.155 ************************************ 00:09:56.155 START TEST raid_read_error_test 00:09:56.155 ************************************ 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:56.155 18:50:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MXQqlroIzA 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81619 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81619 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 81619 ']' 00:09:56.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.155 18:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.415 [2024-11-28 18:50:25.788601] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:56.415 [2024-11-28 18:50:25.788813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81619 ] 00:09:56.415 [2024-11-28 18:50:25.922569] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:56.415 [2024-11-28 18:50:25.962698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.415 [2024-11-28 18:50:25.987562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.674 [2024-11-28 18:50:26.029821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.674 [2024-11-28 18:50:26.029858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.243 BaseBdev1_malloc 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.243 18:50:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.243 true 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.243 [2024-11-28 18:50:26.642007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.243 [2024-11-28 18:50:26.642067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.243 [2024-11-28 18:50:26.642085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:57.243 [2024-11-28 18:50:26.642096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.243 [2024-11-28 18:50:26.644143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.243 [2024-11-28 18:50:26.644183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.243 BaseBdev1 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.243 BaseBdev2_malloc 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.243 true 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.243 [2024-11-28 18:50:26.682451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:57.243 [2024-11-28 18:50:26.682494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.243 [2024-11-28 18:50:26.682509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:57.243 [2024-11-28 18:50:26.682518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.243 [2024-11-28 18:50:26.684484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.243 [2024-11-28 18:50:26.684575] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:57.243 BaseBdev2 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.243 BaseBdev3_malloc 00:09:57.243 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.244 true 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.244 [2024-11-28 18:50:26.722767] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:57.244 [2024-11-28 18:50:26.722847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.244 [2024-11-28 18:50:26.722867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:57.244 [2024-11-28 18:50:26.722877] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.244 [2024-11-28 18:50:26.724864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.244 [2024-11-28 18:50:26.724897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:57.244 BaseBdev3 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.244 18:50:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.244 [2024-11-28 18:50:26.734818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.244 [2024-11-28 18:50:26.736680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.244 [2024-11-28 18:50:26.736749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.244 [2024-11-28 18:50:26.736919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:57.244 [2024-11-28 18:50:26.736930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.244 [2024-11-28 18:50:26.737195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:57.244 [2024-11-28 18:50:26.737333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:57.244 [2024-11-28 18:50:26.737345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:57.244 [2024-11-28 18:50:26.737473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.244 18:50:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.244 "name": "raid_bdev1", 00:09:57.244 "uuid": "744b0bff-44cb-46f3-9799-c169ced3be7b", 00:09:57.244 "strip_size_kb": 0, 00:09:57.244 "state": "online", 00:09:57.244 "raid_level": "raid1", 00:09:57.244 "superblock": true, 00:09:57.244 "num_base_bdevs": 3, 00:09:57.244 "num_base_bdevs_discovered": 3, 00:09:57.244 "num_base_bdevs_operational": 3, 00:09:57.244 "base_bdevs_list": [ 00:09:57.244 { 00:09:57.244 "name": "BaseBdev1", 00:09:57.244 "uuid": "23bba9f5-4413-5a91-9784-115427849a0e", 00:09:57.244 "is_configured": true, 00:09:57.244 "data_offset": 2048, 00:09:57.244 "data_size": 63488 00:09:57.244 }, 00:09:57.244 
{ 00:09:57.244 "name": "BaseBdev2", 00:09:57.244 "uuid": "2a2e90a4-d7ee-52fa-b1da-d6afbd0a2c00", 00:09:57.244 "is_configured": true, 00:09:57.244 "data_offset": 2048, 00:09:57.244 "data_size": 63488 00:09:57.244 }, 00:09:57.244 { 00:09:57.244 "name": "BaseBdev3", 00:09:57.244 "uuid": "e2646182-b8e6-519a-99cc-de2a4f8a4f5f", 00:09:57.244 "is_configured": true, 00:09:57.244 "data_offset": 2048, 00:09:57.244 "data_size": 63488 00:09:57.244 } 00:09:57.244 ] 00:09:57.244 }' 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.244 18:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.812 18:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:57.812 18:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:57.812 [2024-11-28 18:50:27.227318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.751 "name": "raid_bdev1", 00:09:58.751 "uuid": "744b0bff-44cb-46f3-9799-c169ced3be7b", 00:09:58.751 "strip_size_kb": 0, 00:09:58.751 "state": "online", 00:09:58.751 "raid_level": "raid1", 00:09:58.751 "superblock": true, 00:09:58.751 "num_base_bdevs": 3, 00:09:58.751 
"num_base_bdevs_discovered": 3, 00:09:58.751 "num_base_bdevs_operational": 3, 00:09:58.751 "base_bdevs_list": [ 00:09:58.751 { 00:09:58.751 "name": "BaseBdev1", 00:09:58.751 "uuid": "23bba9f5-4413-5a91-9784-115427849a0e", 00:09:58.751 "is_configured": true, 00:09:58.751 "data_offset": 2048, 00:09:58.751 "data_size": 63488 00:09:58.751 }, 00:09:58.751 { 00:09:58.751 "name": "BaseBdev2", 00:09:58.751 "uuid": "2a2e90a4-d7ee-52fa-b1da-d6afbd0a2c00", 00:09:58.751 "is_configured": true, 00:09:58.751 "data_offset": 2048, 00:09:58.751 "data_size": 63488 00:09:58.751 }, 00:09:58.751 { 00:09:58.751 "name": "BaseBdev3", 00:09:58.751 "uuid": "e2646182-b8e6-519a-99cc-de2a4f8a4f5f", 00:09:58.751 "is_configured": true, 00:09:58.751 "data_offset": 2048, 00:09:58.751 "data_size": 63488 00:09:58.751 } 00:09:58.751 ] 00:09:58.751 }' 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.751 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.011 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.011 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 [2024-11-28 18:50:28.600097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.011 [2024-11-28 18:50:28.600131] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.011 [2024-11-28 18:50:28.602654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.011 [2024-11-28 18:50:28.602731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.011 [2024-11-28 18:50:28.602851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.011 
[2024-11-28 18:50:28.602896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:59.011 { 00:09:59.011 "results": [ 00:09:59.011 { 00:09:59.011 "job": "raid_bdev1", 00:09:59.011 "core_mask": "0x1", 00:09:59.011 "workload": "randrw", 00:09:59.011 "percentage": 50, 00:09:59.011 "status": "finished", 00:09:59.011 "queue_depth": 1, 00:09:59.011 "io_size": 131072, 00:09:59.011 "runtime": 1.370974, 00:09:59.011 "iops": 14890.14379557891, 00:09:59.011 "mibps": 1861.2679744473637, 00:09:59.011 "io_failed": 0, 00:09:59.011 "io_timeout": 0, 00:09:59.011 "avg_latency_us": 64.65540629632518, 00:09:59.011 "min_latency_us": 22.424823498649, 00:09:59.011 "max_latency_us": 1406.6277346814259 00:09:59.011 } 00:09:59.011 ], 00:09:59.011 "core_count": 1 00:09:59.011 } 00:09:59.011 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.011 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81619 00:09:59.011 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 81619 ']' 00:09:59.011 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 81619 00:09:59.011 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:59.011 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.271 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81619 00:09:59.271 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.271 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.271 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81619' 00:09:59.271 killing process with pid 81619 00:09:59.271 18:50:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 81619 00:09:59.271 [2024-11-28 18:50:28.641192] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.271 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 81619 00:09:59.271 [2024-11-28 18:50:28.666866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.271 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MXQqlroIzA 00:09:59.271 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:59.271 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:59.532 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:59.532 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:59.532 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.532 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:59.532 18:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:59.532 00:09:59.532 real 0m3.197s 00:09:59.532 user 0m4.045s 00:09:59.532 sys 0m0.521s 00:09:59.532 ************************************ 00:09:59.532 END TEST raid_read_error_test 00:09:59.532 ************************************ 00:09:59.532 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.532 18:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.532 18:50:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:59.532 18:50:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:59.532 18:50:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.532 18:50:28 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.532 ************************************ 00:09:59.532 START TEST raid_write_error_test 00:09:59.532 ************************************ 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:59.532 
18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8x0Yf4kcDo 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81749 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81749 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 81749 ']' 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.532 18:50:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.532 [2024-11-28 18:50:29.055372] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:59.532 [2024-11-28 18:50:29.055499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81749 ] 00:09:59.792 [2024-11-28 18:50:29.189152] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:59.792 [2024-11-28 18:50:29.226642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.792 [2024-11-28 18:50:29.251332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.792 [2024-11-28 18:50:29.292944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.792 [2024-11-28 18:50:29.293066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.361 BaseBdev1_malloc 00:10:00.361 18:50:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.361 true 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.361 [2024-11-28 18:50:29.904971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:00.361 [2024-11-28 18:50:29.905080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.361 [2024-11-28 18:50:29.905109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:00.361 [2024-11-28 18:50:29.905123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.361 [2024-11-28 18:50:29.907173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.361 [2024-11-28 18:50:29.907214] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:00.361 BaseBdev1 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.361 BaseBdev2_malloc 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.361 true 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.361 [2024-11-28 18:50:29.945349] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:00.361 [2024-11-28 18:50:29.945397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.361 [2024-11-28 18:50:29.945414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:00.361 [2024-11-28 18:50:29.945423] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.361 [2024-11-28 18:50:29.947447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.361 [2024-11-28 18:50:29.947524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:00.361 BaseBdev2 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.361 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.620 BaseBdev3_malloc 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.620 true 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.620 [2024-11-28 18:50:29.994189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:00.620 [2024-11-28 18:50:29.994240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.620 [2024-11-28 18:50:29.994264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:00.620 [2024-11-28 18:50:29.994279] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.620 [2024-11-28 18:50:29.996383] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.620 [2024-11-28 18:50:29.996425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:00.620 BaseBdev3 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.620 18:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.620 [2024-11-28 18:50:30.002249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.620 [2024-11-28 18:50:30.004073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.620 [2024-11-28 18:50:30.004140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.620 [2024-11-28 18:50:30.004311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.620 [2024-11-28 18:50:30.004323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:00.620 [2024-11-28 18:50:30.004597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:10:00.620 [2024-11-28 18:50:30.004753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.620 [2024-11-28 18:50:30.004776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:00.620 [2024-11-28 18:50:30.004916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.620 18:50:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.620 "name": "raid_bdev1", 00:10:00.620 "uuid": "047da8c8-b138-41ed-b233-d5162a7e52f0", 00:10:00.620 "strip_size_kb": 0, 00:10:00.620 "state": "online", 00:10:00.620 "raid_level": "raid1", 00:10:00.620 "superblock": true, 00:10:00.620 
"num_base_bdevs": 3, 00:10:00.620 "num_base_bdevs_discovered": 3, 00:10:00.620 "num_base_bdevs_operational": 3, 00:10:00.620 "base_bdevs_list": [ 00:10:00.620 { 00:10:00.620 "name": "BaseBdev1", 00:10:00.620 "uuid": "205ca031-c5c4-5d23-b42f-c3f01f9b12fa", 00:10:00.620 "is_configured": true, 00:10:00.620 "data_offset": 2048, 00:10:00.620 "data_size": 63488 00:10:00.620 }, 00:10:00.620 { 00:10:00.620 "name": "BaseBdev2", 00:10:00.620 "uuid": "d95a1f96-ac90-56ee-bacc-aa27f0c311dd", 00:10:00.620 "is_configured": true, 00:10:00.620 "data_offset": 2048, 00:10:00.620 "data_size": 63488 00:10:00.620 }, 00:10:00.620 { 00:10:00.620 "name": "BaseBdev3", 00:10:00.620 "uuid": "ed18be5b-cc45-5c9e-913a-1f8446dede0b", 00:10:00.620 "is_configured": true, 00:10:00.620 "data_offset": 2048, 00:10:00.620 "data_size": 63488 00:10:00.620 } 00:10:00.620 ] 00:10:00.620 }' 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.620 18:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.879 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:00.879 18:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:01.138 [2024-11-28 18:50:30.490726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.076 [2024-11-28 18:50:31.431852] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:02.076 [2024-11-28 18:50:31.431982] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.076 [2024-11-28 18:50:31.432238] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006b10 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.076 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.076 18:50:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.077 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.077 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.077 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.077 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.077 "name": "raid_bdev1", 00:10:02.077 "uuid": "047da8c8-b138-41ed-b233-d5162a7e52f0", 00:10:02.077 "strip_size_kb": 0, 00:10:02.077 "state": "online", 00:10:02.077 "raid_level": "raid1", 00:10:02.077 "superblock": true, 00:10:02.077 "num_base_bdevs": 3, 00:10:02.077 "num_base_bdevs_discovered": 2, 00:10:02.077 "num_base_bdevs_operational": 2, 00:10:02.077 "base_bdevs_list": [ 00:10:02.077 { 00:10:02.077 "name": null, 00:10:02.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.077 "is_configured": false, 00:10:02.077 "data_offset": 0, 00:10:02.077 "data_size": 63488 00:10:02.077 }, 00:10:02.077 { 00:10:02.077 "name": "BaseBdev2", 00:10:02.077 "uuid": "d95a1f96-ac90-56ee-bacc-aa27f0c311dd", 00:10:02.077 "is_configured": true, 00:10:02.077 "data_offset": 2048, 00:10:02.077 "data_size": 63488 00:10:02.077 }, 00:10:02.077 { 00:10:02.077 "name": "BaseBdev3", 00:10:02.077 "uuid": "ed18be5b-cc45-5c9e-913a-1f8446dede0b", 00:10:02.077 "is_configured": true, 00:10:02.077 "data_offset": 2048, 00:10:02.077 "data_size": 63488 00:10:02.077 } 00:10:02.077 ] 00:10:02.077 }' 00:10:02.077 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.077 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.336 [2024-11-28 18:50:31.862490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.336 [2024-11-28 18:50:31.862589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.336 [2024-11-28 18:50:31.865080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.336 [2024-11-28 18:50:31.865184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.336 [2024-11-28 18:50:31.865279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.336 [2024-11-28 18:50:31.865369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:02.336 { 00:10:02.336 "results": [ 00:10:02.336 { 00:10:02.336 "job": "raid_bdev1", 00:10:02.336 "core_mask": "0x1", 00:10:02.336 "workload": "randrw", 00:10:02.336 "percentage": 50, 00:10:02.336 "status": "finished", 00:10:02.336 "queue_depth": 1, 00:10:02.336 "io_size": 131072, 00:10:02.336 "runtime": 1.369929, 00:10:02.336 "iops": 16369.461483040363, 00:10:02.336 "mibps": 2046.1826853800453, 00:10:02.336 "io_failed": 0, 00:10:02.336 "io_timeout": 0, 00:10:02.336 "avg_latency_us": 58.53686513855757, 00:10:02.336 "min_latency_us": 22.313257212586073, 00:10:02.336 "max_latency_us": 1378.0667654493159 00:10:02.336 } 00:10:02.336 ], 00:10:02.336 "core_count": 1 00:10:02.336 } 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81749 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 81749 ']' 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 81749 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81749 00:10:02.336 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.337 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.337 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81749' 00:10:02.337 killing process with pid 81749 00:10:02.337 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 81749 00:10:02.337 [2024-11-28 18:50:31.907800] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.337 18:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 81749 00:10:02.337 [2024-11-28 18:50:31.932850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.596 18:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:02.596 18:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8x0Yf4kcDo 00:10:02.596 18:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:02.596 18:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:02.596 18:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:02.596 ************************************ 00:10:02.596 END TEST raid_write_error_test 00:10:02.596 ************************************ 00:10:02.596 18:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.596 18:50:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:02.596 18:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:02.596 00:10:02.596 real 0m3.192s 00:10:02.596 user 0m4.002s 00:10:02.596 sys 0m0.510s 00:10:02.596 18:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.596 18:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.857 18:50:32 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:02.857 18:50:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:02.857 18:50:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:02.857 18:50:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:02.857 18:50:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.857 18:50:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.857 ************************************ 00:10:02.857 START TEST raid_state_function_test 00:10:02.857 ************************************ 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.857 18:50:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81876 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81876' 00:10:02.857 Process raid pid: 81876 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81876 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81876 ']' 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.857 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.857 [2024-11-28 18:50:32.316657] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:10:02.857 [2024-11-28 18:50:32.316876] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.857 [2024-11-28 18:50:32.451935] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:03.117 [2024-11-28 18:50:32.488251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.117 [2024-11-28 18:50:32.513232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.117 [2024-11-28 18:50:32.554790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.117 [2024-11-28 18:50:32.554901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.686 [2024-11-28 18:50:33.134093] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.686 [2024-11-28 18:50:33.134220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.686 [2024-11-28 18:50:33.134256] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.686 [2024-11-28 18:50:33.134277] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.686 [2024-11-28 18:50:33.134299] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.686 [2024-11-28 18:50:33.134318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.686 [2024-11-28 18:50:33.134337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.686 [2024-11-28 18:50:33.134355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.686 
18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.686 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.686 "name": "Existed_Raid", 00:10:03.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.686 "strip_size_kb": 64, 00:10:03.686 "state": "configuring", 00:10:03.686 "raid_level": "raid0", 00:10:03.686 "superblock": false, 00:10:03.686 "num_base_bdevs": 4, 00:10:03.686 "num_base_bdevs_discovered": 0, 00:10:03.686 "num_base_bdevs_operational": 4, 00:10:03.686 "base_bdevs_list": [ 00:10:03.686 { 00:10:03.687 "name": "BaseBdev1", 00:10:03.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.687 "is_configured": false, 00:10:03.687 "data_offset": 0, 00:10:03.687 "data_size": 0 00:10:03.687 }, 00:10:03.687 { 00:10:03.687 "name": "BaseBdev2", 00:10:03.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.687 "is_configured": false, 00:10:03.687 "data_offset": 0, 00:10:03.687 "data_size": 0 00:10:03.687 }, 00:10:03.687 { 00:10:03.687 "name": "BaseBdev3", 00:10:03.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.687 "is_configured": false, 00:10:03.687 "data_offset": 0, 00:10:03.687 "data_size": 0 00:10:03.687 }, 00:10:03.687 { 00:10:03.687 "name": "BaseBdev4", 00:10:03.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.687 "is_configured": false, 00:10:03.687 "data_offset": 0, 00:10:03.687 "data_size": 0 00:10:03.687 } 00:10:03.687 ] 00:10:03.687 }' 00:10:03.687 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.687 18:50:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.255 [2024-11-28 18:50:33.582107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.255 [2024-11-28 18:50:33.582184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.255 [2024-11-28 18:50:33.594146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.255 [2024-11-28 18:50:33.594218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.255 [2024-11-28 18:50:33.594246] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.255 [2024-11-28 18:50:33.594266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.255 [2024-11-28 18:50:33.594285] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:04.255 [2024-11-28 18:50:33.594304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:10:04.255 [2024-11-28 18:50:33.594322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:04.255 [2024-11-28 18:50:33.594356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.255 [2024-11-28 18:50:33.614747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.255 BaseBdev1 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.255 18:50:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.255 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.255 [ 00:10:04.255 { 00:10:04.255 "name": "BaseBdev1", 00:10:04.255 "aliases": [ 00:10:04.255 "708f4134-0ecd-427b-b193-8db2ad47b996" 00:10:04.255 ], 00:10:04.255 "product_name": "Malloc disk", 00:10:04.255 "block_size": 512, 00:10:04.255 "num_blocks": 65536, 00:10:04.255 "uuid": "708f4134-0ecd-427b-b193-8db2ad47b996", 00:10:04.255 "assigned_rate_limits": { 00:10:04.255 "rw_ios_per_sec": 0, 00:10:04.255 "rw_mbytes_per_sec": 0, 00:10:04.255 "r_mbytes_per_sec": 0, 00:10:04.255 "w_mbytes_per_sec": 0 00:10:04.255 }, 00:10:04.255 "claimed": true, 00:10:04.255 "claim_type": "exclusive_write", 00:10:04.255 "zoned": false, 00:10:04.255 "supported_io_types": { 00:10:04.255 "read": true, 00:10:04.255 "write": true, 00:10:04.255 "unmap": true, 00:10:04.256 "flush": true, 00:10:04.256 "reset": true, 00:10:04.256 "nvme_admin": false, 00:10:04.256 "nvme_io": false, 00:10:04.256 "nvme_io_md": false, 00:10:04.256 "write_zeroes": true, 00:10:04.256 "zcopy": true, 00:10:04.256 "get_zone_info": false, 00:10:04.256 "zone_management": false, 00:10:04.256 "zone_append": false, 00:10:04.256 "compare": false, 00:10:04.256 "compare_and_write": false, 00:10:04.256 "abort": true, 00:10:04.256 "seek_hole": false, 00:10:04.256 "seek_data": false, 00:10:04.256 "copy": true, 00:10:04.256 "nvme_iov_md": false 00:10:04.256 }, 00:10:04.256 "memory_domains": [ 00:10:04.256 { 00:10:04.256 "dma_device_id": "system", 00:10:04.256 "dma_device_type": 1 00:10:04.256 }, 00:10:04.256 { 00:10:04.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.256 "dma_device_type": 
2 00:10:04.256 } 00:10:04.256 ], 00:10:04.256 "driver_specific": {} 00:10:04.256 } 00:10:04.256 ] 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.256 "name": "Existed_Raid", 00:10:04.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.256 "strip_size_kb": 64, 00:10:04.256 "state": "configuring", 00:10:04.256 "raid_level": "raid0", 00:10:04.256 "superblock": false, 00:10:04.256 "num_base_bdevs": 4, 00:10:04.256 "num_base_bdevs_discovered": 1, 00:10:04.256 "num_base_bdevs_operational": 4, 00:10:04.256 "base_bdevs_list": [ 00:10:04.256 { 00:10:04.256 "name": "BaseBdev1", 00:10:04.256 "uuid": "708f4134-0ecd-427b-b193-8db2ad47b996", 00:10:04.256 "is_configured": true, 00:10:04.256 "data_offset": 0, 00:10:04.256 "data_size": 65536 00:10:04.256 }, 00:10:04.256 { 00:10:04.256 "name": "BaseBdev2", 00:10:04.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.256 "is_configured": false, 00:10:04.256 "data_offset": 0, 00:10:04.256 "data_size": 0 00:10:04.256 }, 00:10:04.256 { 00:10:04.256 "name": "BaseBdev3", 00:10:04.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.256 "is_configured": false, 00:10:04.256 "data_offset": 0, 00:10:04.256 "data_size": 0 00:10:04.256 }, 00:10:04.256 { 00:10:04.256 "name": "BaseBdev4", 00:10:04.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.256 "is_configured": false, 00:10:04.256 "data_offset": 0, 00:10:04.256 "data_size": 0 00:10:04.256 } 00:10:04.256 ] 00:10:04.256 }' 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.256 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.515 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.515 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.515 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:10:04.515 [2024-11-28 18:50:34.038893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.515 [2024-11-28 18:50:34.038945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:04.515 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.515 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:04.515 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.515 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.515 [2024-11-28 18:50:34.050972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.515 [2024-11-28 18:50:34.052798] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.515 [2024-11-28 18:50:34.052866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.515 [2024-11-28 18:50:34.052896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:04.515 [2024-11-28 18:50:34.052916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.515 [2024-11-28 18:50:34.052935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:04.515 [2024-11-28 18:50:34.052953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:04.515 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.516 "name": "Existed_Raid", 00:10:04.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.516 
"strip_size_kb": 64, 00:10:04.516 "state": "configuring", 00:10:04.516 "raid_level": "raid0", 00:10:04.516 "superblock": false, 00:10:04.516 "num_base_bdevs": 4, 00:10:04.516 "num_base_bdevs_discovered": 1, 00:10:04.516 "num_base_bdevs_operational": 4, 00:10:04.516 "base_bdevs_list": [ 00:10:04.516 { 00:10:04.516 "name": "BaseBdev1", 00:10:04.516 "uuid": "708f4134-0ecd-427b-b193-8db2ad47b996", 00:10:04.516 "is_configured": true, 00:10:04.516 "data_offset": 0, 00:10:04.516 "data_size": 65536 00:10:04.516 }, 00:10:04.516 { 00:10:04.516 "name": "BaseBdev2", 00:10:04.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.516 "is_configured": false, 00:10:04.516 "data_offset": 0, 00:10:04.516 "data_size": 0 00:10:04.516 }, 00:10:04.516 { 00:10:04.516 "name": "BaseBdev3", 00:10:04.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.516 "is_configured": false, 00:10:04.516 "data_offset": 0, 00:10:04.516 "data_size": 0 00:10:04.516 }, 00:10:04.516 { 00:10:04.516 "name": "BaseBdev4", 00:10:04.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.516 "is_configured": false, 00:10:04.516 "data_offset": 0, 00:10:04.516 "data_size": 0 00:10:04.516 } 00:10:04.516 ] 00:10:04.516 }' 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.516 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.085 [2024-11-28 18:50:34.457991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.085 BaseBdev2 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.085 [ 00:10:05.085 { 00:10:05.085 "name": "BaseBdev2", 00:10:05.085 "aliases": [ 00:10:05.085 "4e372257-2ef4-4c09-a636-ba7d7d1ea1db" 00:10:05.085 ], 00:10:05.085 "product_name": "Malloc disk", 00:10:05.085 "block_size": 512, 00:10:05.085 "num_blocks": 65536, 00:10:05.085 "uuid": "4e372257-2ef4-4c09-a636-ba7d7d1ea1db", 00:10:05.085 "assigned_rate_limits": { 00:10:05.085 "rw_ios_per_sec": 0, 00:10:05.085 "rw_mbytes_per_sec": 0, 00:10:05.085 "r_mbytes_per_sec": 0, 00:10:05.085 "w_mbytes_per_sec": 0 00:10:05.085 
}, 00:10:05.085 "claimed": true, 00:10:05.085 "claim_type": "exclusive_write", 00:10:05.085 "zoned": false, 00:10:05.085 "supported_io_types": { 00:10:05.085 "read": true, 00:10:05.085 "write": true, 00:10:05.085 "unmap": true, 00:10:05.085 "flush": true, 00:10:05.085 "reset": true, 00:10:05.085 "nvme_admin": false, 00:10:05.085 "nvme_io": false, 00:10:05.085 "nvme_io_md": false, 00:10:05.085 "write_zeroes": true, 00:10:05.085 "zcopy": true, 00:10:05.085 "get_zone_info": false, 00:10:05.085 "zone_management": false, 00:10:05.085 "zone_append": false, 00:10:05.085 "compare": false, 00:10:05.085 "compare_and_write": false, 00:10:05.085 "abort": true, 00:10:05.085 "seek_hole": false, 00:10:05.085 "seek_data": false, 00:10:05.085 "copy": true, 00:10:05.085 "nvme_iov_md": false 00:10:05.085 }, 00:10:05.085 "memory_domains": [ 00:10:05.085 { 00:10:05.085 "dma_device_id": "system", 00:10:05.085 "dma_device_type": 1 00:10:05.085 }, 00:10:05.085 { 00:10:05.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.085 "dma_device_type": 2 00:10:05.085 } 00:10:05.085 ], 00:10:05.085 "driver_specific": {} 00:10:05.085 } 00:10:05.085 ] 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.085 18:50:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.085 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.086 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.086 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.086 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.086 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.086 "name": "Existed_Raid", 00:10:05.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.086 "strip_size_kb": 64, 00:10:05.086 "state": "configuring", 00:10:05.086 "raid_level": "raid0", 00:10:05.086 "superblock": false, 00:10:05.086 "num_base_bdevs": 4, 00:10:05.086 "num_base_bdevs_discovered": 2, 00:10:05.086 "num_base_bdevs_operational": 4, 00:10:05.086 "base_bdevs_list": [ 00:10:05.086 { 00:10:05.086 "name": "BaseBdev1", 00:10:05.086 "uuid": "708f4134-0ecd-427b-b193-8db2ad47b996", 00:10:05.086 "is_configured": true, 00:10:05.086 "data_offset": 0, 
00:10:05.086 "data_size": 65536 00:10:05.086 }, 00:10:05.086 { 00:10:05.086 "name": "BaseBdev2", 00:10:05.086 "uuid": "4e372257-2ef4-4c09-a636-ba7d7d1ea1db", 00:10:05.086 "is_configured": true, 00:10:05.086 "data_offset": 0, 00:10:05.086 "data_size": 65536 00:10:05.086 }, 00:10:05.086 { 00:10:05.086 "name": "BaseBdev3", 00:10:05.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.086 "is_configured": false, 00:10:05.086 "data_offset": 0, 00:10:05.086 "data_size": 0 00:10:05.086 }, 00:10:05.086 { 00:10:05.086 "name": "BaseBdev4", 00:10:05.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.086 "is_configured": false, 00:10:05.086 "data_offset": 0, 00:10:05.086 "data_size": 0 00:10:05.086 } 00:10:05.086 ] 00:10:05.086 }' 00:10:05.086 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.086 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 [2024-11-28 18:50:34.929620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.346 BaseBdev3 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 
00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.346 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.605 [ 00:10:05.605 { 00:10:05.605 "name": "BaseBdev3", 00:10:05.605 "aliases": [ 00:10:05.605 "00e74120-51f7-4b51-b874-175639cb540d" 00:10:05.605 ], 00:10:05.605 "product_name": "Malloc disk", 00:10:05.606 "block_size": 512, 00:10:05.606 "num_blocks": 65536, 00:10:05.606 "uuid": "00e74120-51f7-4b51-b874-175639cb540d", 00:10:05.606 "assigned_rate_limits": { 00:10:05.606 "rw_ios_per_sec": 0, 00:10:05.606 "rw_mbytes_per_sec": 0, 00:10:05.606 "r_mbytes_per_sec": 0, 00:10:05.606 "w_mbytes_per_sec": 0 00:10:05.606 }, 00:10:05.606 "claimed": true, 00:10:05.606 "claim_type": "exclusive_write", 00:10:05.606 "zoned": false, 00:10:05.606 "supported_io_types": { 00:10:05.606 "read": true, 00:10:05.606 "write": true, 00:10:05.606 "unmap": true, 00:10:05.606 "flush": true, 00:10:05.606 "reset": true, 00:10:05.606 "nvme_admin": false, 00:10:05.606 "nvme_io": false, 00:10:05.606 "nvme_io_md": false, 00:10:05.606 "write_zeroes": true, 00:10:05.606 "zcopy": true, 00:10:05.606 
"get_zone_info": false, 00:10:05.606 "zone_management": false, 00:10:05.606 "zone_append": false, 00:10:05.606 "compare": false, 00:10:05.606 "compare_and_write": false, 00:10:05.606 "abort": true, 00:10:05.606 "seek_hole": false, 00:10:05.606 "seek_data": false, 00:10:05.606 "copy": true, 00:10:05.606 "nvme_iov_md": false 00:10:05.606 }, 00:10:05.606 "memory_domains": [ 00:10:05.606 { 00:10:05.606 "dma_device_id": "system", 00:10:05.606 "dma_device_type": 1 00:10:05.606 }, 00:10:05.606 { 00:10:05.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.606 "dma_device_type": 2 00:10:05.606 } 00:10:05.606 ], 00:10:05.606 "driver_specific": {} 00:10:05.606 } 00:10:05.606 ] 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.606 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.606 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.606 "name": "Existed_Raid", 00:10:05.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.606 "strip_size_kb": 64, 00:10:05.606 "state": "configuring", 00:10:05.606 "raid_level": "raid0", 00:10:05.606 "superblock": false, 00:10:05.606 "num_base_bdevs": 4, 00:10:05.606 "num_base_bdevs_discovered": 3, 00:10:05.606 "num_base_bdevs_operational": 4, 00:10:05.606 "base_bdevs_list": [ 00:10:05.606 { 00:10:05.606 "name": "BaseBdev1", 00:10:05.606 "uuid": "708f4134-0ecd-427b-b193-8db2ad47b996", 00:10:05.606 "is_configured": true, 00:10:05.606 "data_offset": 0, 00:10:05.606 "data_size": 65536 00:10:05.606 }, 00:10:05.606 { 00:10:05.606 "name": "BaseBdev2", 00:10:05.606 "uuid": "4e372257-2ef4-4c09-a636-ba7d7d1ea1db", 00:10:05.606 "is_configured": true, 00:10:05.606 "data_offset": 0, 00:10:05.606 "data_size": 65536 00:10:05.606 }, 00:10:05.606 { 00:10:05.606 "name": "BaseBdev3", 00:10:05.606 "uuid": "00e74120-51f7-4b51-b874-175639cb540d", 00:10:05.606 "is_configured": true, 00:10:05.606 "data_offset": 0, 00:10:05.606 "data_size": 65536 
00:10:05.606 }, 00:10:05.606 { 00:10:05.606 "name": "BaseBdev4", 00:10:05.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.606 "is_configured": false, 00:10:05.606 "data_offset": 0, 00:10:05.606 "data_size": 0 00:10:05.606 } 00:10:05.606 ] 00:10:05.606 }' 00:10:05.606 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.606 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.864 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:05.864 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.864 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.864 [2024-11-28 18:50:35.424834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:05.864 [2024-11-28 18:50:35.424872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:05.864 [2024-11-28 18:50:35.424883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:05.864 [2024-11-28 18:50:35.425161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:05.864 [2024-11-28 18:50:35.425295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:05.864 [2024-11-28 18:50:35.425305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:05.865 [2024-11-28 18:50:35.425546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.865 BaseBdev4 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:05.865 18:50:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.865 [ 00:10:05.865 { 00:10:05.865 "name": "BaseBdev4", 00:10:05.865 "aliases": [ 00:10:05.865 "2946097c-0cc0-439b-be79-1097719c4fdb" 00:10:05.865 ], 00:10:05.865 "product_name": "Malloc disk", 00:10:05.865 "block_size": 512, 00:10:05.865 "num_blocks": 65536, 00:10:05.865 "uuid": "2946097c-0cc0-439b-be79-1097719c4fdb", 00:10:05.865 "assigned_rate_limits": { 00:10:05.865 "rw_ios_per_sec": 0, 00:10:05.865 "rw_mbytes_per_sec": 0, 00:10:05.865 "r_mbytes_per_sec": 0, 00:10:05.865 "w_mbytes_per_sec": 0 00:10:05.865 }, 00:10:05.865 "claimed": true, 00:10:05.865 "claim_type": "exclusive_write", 00:10:05.865 "zoned": false, 00:10:05.865 "supported_io_types": { 
00:10:05.865 "read": true, 00:10:05.865 "write": true, 00:10:05.865 "unmap": true, 00:10:05.865 "flush": true, 00:10:05.865 "reset": true, 00:10:05.865 "nvme_admin": false, 00:10:05.865 "nvme_io": false, 00:10:05.865 "nvme_io_md": false, 00:10:05.865 "write_zeroes": true, 00:10:05.865 "zcopy": true, 00:10:05.865 "get_zone_info": false, 00:10:05.865 "zone_management": false, 00:10:05.865 "zone_append": false, 00:10:05.865 "compare": false, 00:10:05.865 "compare_and_write": false, 00:10:05.865 "abort": true, 00:10:05.865 "seek_hole": false, 00:10:05.865 "seek_data": false, 00:10:05.865 "copy": true, 00:10:05.865 "nvme_iov_md": false 00:10:05.865 }, 00:10:05.865 "memory_domains": [ 00:10:05.865 { 00:10:05.865 "dma_device_id": "system", 00:10:05.865 "dma_device_type": 1 00:10:05.865 }, 00:10:05.865 { 00:10:05.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.865 "dma_device_type": 2 00:10:05.865 } 00:10:05.865 ], 00:10:05.865 "driver_specific": {} 00:10:05.865 } 00:10:05.865 ] 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.865 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.124 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.124 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.124 "name": "Existed_Raid", 00:10:06.124 "uuid": "585261a0-b1d5-4efd-b798-637e18ef3115", 00:10:06.124 "strip_size_kb": 64, 00:10:06.124 "state": "online", 00:10:06.124 "raid_level": "raid0", 00:10:06.124 "superblock": false, 00:10:06.124 "num_base_bdevs": 4, 00:10:06.124 "num_base_bdevs_discovered": 4, 00:10:06.124 "num_base_bdevs_operational": 4, 00:10:06.124 "base_bdevs_list": [ 00:10:06.124 { 00:10:06.124 "name": "BaseBdev1", 00:10:06.124 "uuid": "708f4134-0ecd-427b-b193-8db2ad47b996", 00:10:06.124 "is_configured": true, 00:10:06.124 "data_offset": 0, 00:10:06.124 "data_size": 65536 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "name": "BaseBdev2", 00:10:06.124 "uuid": "4e372257-2ef4-4c09-a636-ba7d7d1ea1db", 00:10:06.124 
"is_configured": true, 00:10:06.124 "data_offset": 0, 00:10:06.124 "data_size": 65536 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "name": "BaseBdev3", 00:10:06.124 "uuid": "00e74120-51f7-4b51-b874-175639cb540d", 00:10:06.124 "is_configured": true, 00:10:06.124 "data_offset": 0, 00:10:06.124 "data_size": 65536 00:10:06.124 }, 00:10:06.124 { 00:10:06.124 "name": "BaseBdev4", 00:10:06.124 "uuid": "2946097c-0cc0-439b-be79-1097719c4fdb", 00:10:06.124 "is_configured": true, 00:10:06.124 "data_offset": 0, 00:10:06.124 "data_size": 65536 00:10:06.124 } 00:10:06.124 ] 00:10:06.124 }' 00:10:06.124 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.124 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.405 [2024-11-28 18:50:35.877320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.405 "name": "Existed_Raid", 00:10:06.405 "aliases": [ 00:10:06.405 "585261a0-b1d5-4efd-b798-637e18ef3115" 00:10:06.405 ], 00:10:06.405 "product_name": "Raid Volume", 00:10:06.405 "block_size": 512, 00:10:06.405 "num_blocks": 262144, 00:10:06.405 "uuid": "585261a0-b1d5-4efd-b798-637e18ef3115", 00:10:06.405 "assigned_rate_limits": { 00:10:06.405 "rw_ios_per_sec": 0, 00:10:06.405 "rw_mbytes_per_sec": 0, 00:10:06.405 "r_mbytes_per_sec": 0, 00:10:06.405 "w_mbytes_per_sec": 0 00:10:06.405 }, 00:10:06.405 "claimed": false, 00:10:06.405 "zoned": false, 00:10:06.405 "supported_io_types": { 00:10:06.405 "read": true, 00:10:06.405 "write": true, 00:10:06.405 "unmap": true, 00:10:06.405 "flush": true, 00:10:06.405 "reset": true, 00:10:06.405 "nvme_admin": false, 00:10:06.405 "nvme_io": false, 00:10:06.405 "nvme_io_md": false, 00:10:06.405 "write_zeroes": true, 00:10:06.405 "zcopy": false, 00:10:06.405 "get_zone_info": false, 00:10:06.405 "zone_management": false, 00:10:06.405 "zone_append": false, 00:10:06.405 "compare": false, 00:10:06.405 "compare_and_write": false, 00:10:06.405 "abort": false, 00:10:06.405 "seek_hole": false, 00:10:06.405 "seek_data": false, 00:10:06.405 "copy": false, 00:10:06.405 "nvme_iov_md": false 00:10:06.405 }, 00:10:06.405 "memory_domains": [ 00:10:06.405 { 00:10:06.405 "dma_device_id": "system", 00:10:06.405 "dma_device_type": 1 00:10:06.405 }, 00:10:06.405 { 00:10:06.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.405 "dma_device_type": 2 00:10:06.405 }, 00:10:06.405 { 00:10:06.405 "dma_device_id": "system", 00:10:06.405 "dma_device_type": 1 00:10:06.405 }, 00:10:06.405 { 00:10:06.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.405 "dma_device_type": 2 00:10:06.405 }, 00:10:06.405 { 
00:10:06.405 "dma_device_id": "system", 00:10:06.405 "dma_device_type": 1 00:10:06.405 }, 00:10:06.405 { 00:10:06.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.405 "dma_device_type": 2 00:10:06.405 }, 00:10:06.405 { 00:10:06.405 "dma_device_id": "system", 00:10:06.405 "dma_device_type": 1 00:10:06.405 }, 00:10:06.405 { 00:10:06.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.405 "dma_device_type": 2 00:10:06.405 } 00:10:06.405 ], 00:10:06.405 "driver_specific": { 00:10:06.405 "raid": { 00:10:06.405 "uuid": "585261a0-b1d5-4efd-b798-637e18ef3115", 00:10:06.405 "strip_size_kb": 64, 00:10:06.405 "state": "online", 00:10:06.405 "raid_level": "raid0", 00:10:06.405 "superblock": false, 00:10:06.405 "num_base_bdevs": 4, 00:10:06.405 "num_base_bdevs_discovered": 4, 00:10:06.405 "num_base_bdevs_operational": 4, 00:10:06.405 "base_bdevs_list": [ 00:10:06.405 { 00:10:06.405 "name": "BaseBdev1", 00:10:06.405 "uuid": "708f4134-0ecd-427b-b193-8db2ad47b996", 00:10:06.405 "is_configured": true, 00:10:06.405 "data_offset": 0, 00:10:06.405 "data_size": 65536 00:10:06.405 }, 00:10:06.405 { 00:10:06.405 "name": "BaseBdev2", 00:10:06.405 "uuid": "4e372257-2ef4-4c09-a636-ba7d7d1ea1db", 00:10:06.405 "is_configured": true, 00:10:06.405 "data_offset": 0, 00:10:06.405 "data_size": 65536 00:10:06.405 }, 00:10:06.405 { 00:10:06.405 "name": "BaseBdev3", 00:10:06.405 "uuid": "00e74120-51f7-4b51-b874-175639cb540d", 00:10:06.405 "is_configured": true, 00:10:06.405 "data_offset": 0, 00:10:06.405 "data_size": 65536 00:10:06.405 }, 00:10:06.405 { 00:10:06.405 "name": "BaseBdev4", 00:10:06.405 "uuid": "2946097c-0cc0-439b-be79-1097719c4fdb", 00:10:06.405 "is_configured": true, 00:10:06.405 "data_offset": 0, 00:10:06.405 "data_size": 65536 00:10:06.405 } 00:10:06.405 ] 00:10:06.405 } 00:10:06.405 } 00:10:06.405 }' 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:06.405 BaseBdev2 00:10:06.405 BaseBdev3 00:10:06.405 BaseBdev4' 00:10:06.405 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.689 [2024-11-28 18:50:36.229140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.689 [2024-11-28 18:50:36.229166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.689 [2024-11-28 18:50:36.229214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid0 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.689 "name": "Existed_Raid", 00:10:06.689 "uuid": "585261a0-b1d5-4efd-b798-637e18ef3115", 00:10:06.689 "strip_size_kb": 64, 00:10:06.689 "state": "offline", 00:10:06.689 "raid_level": "raid0", 00:10:06.689 "superblock": false, 00:10:06.689 "num_base_bdevs": 4, 00:10:06.689 "num_base_bdevs_discovered": 3, 00:10:06.689 "num_base_bdevs_operational": 3, 00:10:06.689 "base_bdevs_list": [ 00:10:06.689 { 00:10:06.689 "name": null, 00:10:06.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.689 "is_configured": false, 00:10:06.689 "data_offset": 0, 00:10:06.689 "data_size": 65536 00:10:06.689 }, 00:10:06.689 { 
00:10:06.689 "name": "BaseBdev2", 00:10:06.689 "uuid": "4e372257-2ef4-4c09-a636-ba7d7d1ea1db", 00:10:06.689 "is_configured": true, 00:10:06.689 "data_offset": 0, 00:10:06.689 "data_size": 65536 00:10:06.689 }, 00:10:06.689 { 00:10:06.689 "name": "BaseBdev3", 00:10:06.689 "uuid": "00e74120-51f7-4b51-b874-175639cb540d", 00:10:06.689 "is_configured": true, 00:10:06.689 "data_offset": 0, 00:10:06.689 "data_size": 65536 00:10:06.689 }, 00:10:06.689 { 00:10:06.689 "name": "BaseBdev4", 00:10:06.689 "uuid": "2946097c-0cc0-439b-be79-1097719c4fdb", 00:10:06.690 "is_configured": true, 00:10:06.690 "data_offset": 0, 00:10:06.690 "data_size": 65536 00:10:06.690 } 00:10:06.690 ] 00:10:06.690 }' 00:10:06.690 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.690 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete 
BaseBdev2 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.259 [2024-11-28 18:50:36.696468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.259 [2024-11-28 18:50:36.763455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.259 18:50:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.259 [2024-11-28 18:50:36.834580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:07.259 [2024-11-28 18:50:36.834679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:07.259 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 BaseBdev2 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.521 
18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 [ 00:10:07.521 { 00:10:07.521 "name": "BaseBdev2", 00:10:07.521 "aliases": [ 00:10:07.521 "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c" 00:10:07.521 ], 00:10:07.521 "product_name": "Malloc disk", 00:10:07.521 "block_size": 512, 00:10:07.521 "num_blocks": 65536, 00:10:07.521 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:07.521 "assigned_rate_limits": { 00:10:07.521 "rw_ios_per_sec": 0, 00:10:07.521 "rw_mbytes_per_sec": 0, 00:10:07.521 "r_mbytes_per_sec": 0, 00:10:07.521 "w_mbytes_per_sec": 0 00:10:07.521 }, 00:10:07.521 "claimed": false, 00:10:07.521 "zoned": false, 00:10:07.521 "supported_io_types": { 00:10:07.521 "read": true, 00:10:07.521 "write": true, 00:10:07.521 "unmap": true, 00:10:07.521 "flush": true, 00:10:07.521 "reset": true, 00:10:07.521 "nvme_admin": false, 00:10:07.521 "nvme_io": false, 00:10:07.521 "nvme_io_md": false, 00:10:07.521 "write_zeroes": true, 00:10:07.521 "zcopy": true, 00:10:07.521 "get_zone_info": false, 00:10:07.521 "zone_management": false, 00:10:07.521 "zone_append": false, 00:10:07.521 "compare": false, 00:10:07.521 "compare_and_write": 
false, 00:10:07.521 "abort": true, 00:10:07.521 "seek_hole": false, 00:10:07.521 "seek_data": false, 00:10:07.521 "copy": true, 00:10:07.521 "nvme_iov_md": false 00:10:07.521 }, 00:10:07.521 "memory_domains": [ 00:10:07.521 { 00:10:07.521 "dma_device_id": "system", 00:10:07.521 "dma_device_type": 1 00:10:07.521 }, 00:10:07.521 { 00:10:07.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.521 "dma_device_type": 2 00:10:07.521 } 00:10:07.521 ], 00:10:07.521 "driver_specific": {} 00:10:07.521 } 00:10:07.521 ] 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 BaseBdev3 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.521 18:50:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.521 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 [ 00:10:07.521 { 00:10:07.521 "name": "BaseBdev3", 00:10:07.521 "aliases": [ 00:10:07.521 "28783ee8-2663-4869-a97e-012a1e1cdcb1" 00:10:07.521 ], 00:10:07.521 "product_name": "Malloc disk", 00:10:07.521 "block_size": 512, 00:10:07.521 "num_blocks": 65536, 00:10:07.521 "uuid": "28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:07.521 "assigned_rate_limits": { 00:10:07.521 "rw_ios_per_sec": 0, 00:10:07.521 "rw_mbytes_per_sec": 0, 00:10:07.521 "r_mbytes_per_sec": 0, 00:10:07.521 "w_mbytes_per_sec": 0 00:10:07.521 }, 00:10:07.521 "claimed": false, 00:10:07.521 "zoned": false, 00:10:07.521 "supported_io_types": { 00:10:07.521 "read": true, 00:10:07.521 "write": true, 00:10:07.521 "unmap": true, 00:10:07.521 "flush": true, 00:10:07.521 "reset": true, 00:10:07.521 "nvme_admin": false, 00:10:07.521 "nvme_io": false, 00:10:07.521 "nvme_io_md": false, 00:10:07.521 "write_zeroes": true, 00:10:07.521 "zcopy": true, 00:10:07.521 "get_zone_info": false, 00:10:07.521 "zone_management": false, 00:10:07.521 "zone_append": false, 00:10:07.521 "compare": false, 00:10:07.521 "compare_and_write": false, 
00:10:07.521 "abort": true, 00:10:07.521 "seek_hole": false, 00:10:07.521 "seek_data": false, 00:10:07.521 "copy": true, 00:10:07.521 "nvme_iov_md": false 00:10:07.521 }, 00:10:07.521 "memory_domains": [ 00:10:07.521 { 00:10:07.521 "dma_device_id": "system", 00:10:07.521 "dma_device_type": 1 00:10:07.521 }, 00:10:07.521 { 00:10:07.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.521 "dma_device_type": 2 00:10:07.521 } 00:10:07.521 ], 00:10:07.521 "driver_specific": {} 00:10:07.521 } 00:10:07.521 ] 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 BaseBdev4 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.521 18:50:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.521 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 [ 00:10:07.521 { 00:10:07.521 "name": "BaseBdev4", 00:10:07.521 "aliases": [ 00:10:07.521 "9740d621-b179-4610-93fb-c63cce2cb78d" 00:10:07.521 ], 00:10:07.521 "product_name": "Malloc disk", 00:10:07.521 "block_size": 512, 00:10:07.522 "num_blocks": 65536, 00:10:07.522 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:07.522 "assigned_rate_limits": { 00:10:07.522 "rw_ios_per_sec": 0, 00:10:07.522 "rw_mbytes_per_sec": 0, 00:10:07.522 "r_mbytes_per_sec": 0, 00:10:07.522 "w_mbytes_per_sec": 0 00:10:07.522 }, 00:10:07.522 "claimed": false, 00:10:07.522 "zoned": false, 00:10:07.522 "supported_io_types": { 00:10:07.522 "read": true, 00:10:07.522 "write": true, 00:10:07.522 "unmap": true, 00:10:07.522 "flush": true, 00:10:07.522 "reset": true, 00:10:07.522 "nvme_admin": false, 00:10:07.522 "nvme_io": false, 00:10:07.522 "nvme_io_md": false, 00:10:07.522 "write_zeroes": true, 00:10:07.522 "zcopy": true, 00:10:07.522 "get_zone_info": false, 00:10:07.522 "zone_management": false, 00:10:07.522 "zone_append": false, 00:10:07.522 "compare": false, 00:10:07.522 "compare_and_write": false, 
00:10:07.522 "abort": true, 00:10:07.522 "seek_hole": false, 00:10:07.522 "seek_data": false, 00:10:07.522 "copy": true, 00:10:07.522 "nvme_iov_md": false 00:10:07.522 }, 00:10:07.522 "memory_domains": [ 00:10:07.522 { 00:10:07.522 "dma_device_id": "system", 00:10:07.522 "dma_device_type": 1 00:10:07.522 }, 00:10:07.522 { 00:10:07.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.522 "dma_device_type": 2 00:10:07.522 } 00:10:07.522 ], 00:10:07.522 "driver_specific": {} 00:10:07.522 } 00:10:07.522 ] 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.522 [2024-11-28 18:50:37.066869] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.522 [2024-11-28 18:50:37.066959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.522 [2024-11-28 18:50:37.067000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.522 [2024-11-28 18:50:37.068812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.522 [2024-11-28 18:50:37.068913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:07.522 18:50:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.522 "name": "Existed_Raid", 00:10:07.522 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:07.522 "strip_size_kb": 64, 00:10:07.522 "state": "configuring", 00:10:07.522 "raid_level": "raid0", 00:10:07.522 "superblock": false, 00:10:07.522 "num_base_bdevs": 4, 00:10:07.522 "num_base_bdevs_discovered": 3, 00:10:07.522 "num_base_bdevs_operational": 4, 00:10:07.522 "base_bdevs_list": [ 00:10:07.522 { 00:10:07.522 "name": "BaseBdev1", 00:10:07.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.522 "is_configured": false, 00:10:07.522 "data_offset": 0, 00:10:07.522 "data_size": 0 00:10:07.522 }, 00:10:07.522 { 00:10:07.522 "name": "BaseBdev2", 00:10:07.522 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:07.522 "is_configured": true, 00:10:07.522 "data_offset": 0, 00:10:07.522 "data_size": 65536 00:10:07.522 }, 00:10:07.522 { 00:10:07.522 "name": "BaseBdev3", 00:10:07.522 "uuid": "28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:07.522 "is_configured": true, 00:10:07.522 "data_offset": 0, 00:10:07.522 "data_size": 65536 00:10:07.522 }, 00:10:07.522 { 00:10:07.522 "name": "BaseBdev4", 00:10:07.522 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:07.522 "is_configured": true, 00:10:07.522 "data_offset": 0, 00:10:07.522 "data_size": 65536 00:10:07.522 } 00:10:07.522 ] 00:10:07.522 }' 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.522 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.092 [2024-11-28 18:50:37.510987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.092 "name": "Existed_Raid", 00:10:08.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.092 
"strip_size_kb": 64, 00:10:08.092 "state": "configuring", 00:10:08.092 "raid_level": "raid0", 00:10:08.092 "superblock": false, 00:10:08.092 "num_base_bdevs": 4, 00:10:08.092 "num_base_bdevs_discovered": 2, 00:10:08.092 "num_base_bdevs_operational": 4, 00:10:08.092 "base_bdevs_list": [ 00:10:08.092 { 00:10:08.092 "name": "BaseBdev1", 00:10:08.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.092 "is_configured": false, 00:10:08.092 "data_offset": 0, 00:10:08.092 "data_size": 0 00:10:08.092 }, 00:10:08.092 { 00:10:08.092 "name": null, 00:10:08.092 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:08.092 "is_configured": false, 00:10:08.092 "data_offset": 0, 00:10:08.092 "data_size": 65536 00:10:08.092 }, 00:10:08.092 { 00:10:08.092 "name": "BaseBdev3", 00:10:08.092 "uuid": "28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:08.092 "is_configured": true, 00:10:08.092 "data_offset": 0, 00:10:08.092 "data_size": 65536 00:10:08.092 }, 00:10:08.092 { 00:10:08.092 "name": "BaseBdev4", 00:10:08.092 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:08.092 "is_configured": true, 00:10:08.092 "data_offset": 0, 00:10:08.092 "data_size": 65536 00:10:08.092 } 00:10:08.092 ] 00:10:08.092 }' 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.092 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.352 
18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.352 [2024-11-28 18:50:37.954194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.352 BaseBdev1 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:08.352 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.612 18:50:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.612 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.612 [ 00:10:08.612 { 00:10:08.612 "name": "BaseBdev1", 00:10:08.612 "aliases": [ 00:10:08.612 "62742faa-30c8-4340-b69b-ae6872e53011" 00:10:08.612 ], 00:10:08.612 "product_name": "Malloc disk", 00:10:08.613 "block_size": 512, 00:10:08.613 "num_blocks": 65536, 00:10:08.613 "uuid": "62742faa-30c8-4340-b69b-ae6872e53011", 00:10:08.613 "assigned_rate_limits": { 00:10:08.613 "rw_ios_per_sec": 0, 00:10:08.613 "rw_mbytes_per_sec": 0, 00:10:08.613 "r_mbytes_per_sec": 0, 00:10:08.613 "w_mbytes_per_sec": 0 00:10:08.613 }, 00:10:08.613 "claimed": true, 00:10:08.613 "claim_type": "exclusive_write", 00:10:08.613 "zoned": false, 00:10:08.613 "supported_io_types": { 00:10:08.613 "read": true, 00:10:08.613 "write": true, 00:10:08.613 "unmap": true, 00:10:08.613 "flush": true, 00:10:08.613 "reset": true, 00:10:08.613 "nvme_admin": false, 00:10:08.613 "nvme_io": false, 00:10:08.613 "nvme_io_md": false, 00:10:08.613 "write_zeroes": true, 00:10:08.613 "zcopy": true, 00:10:08.613 "get_zone_info": false, 00:10:08.613 "zone_management": false, 00:10:08.613 "zone_append": false, 00:10:08.613 "compare": false, 00:10:08.613 "compare_and_write": false, 00:10:08.613 "abort": true, 00:10:08.613 "seek_hole": false, 00:10:08.613 "seek_data": false, 00:10:08.613 "copy": true, 00:10:08.613 "nvme_iov_md": false 00:10:08.613 }, 00:10:08.613 "memory_domains": [ 00:10:08.613 { 00:10:08.613 "dma_device_id": "system", 00:10:08.613 "dma_device_type": 1 00:10:08.613 }, 00:10:08.613 { 00:10:08.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.613 "dma_device_type": 2 00:10:08.613 } 00:10:08.613 ], 00:10:08.613 "driver_specific": {} 00:10:08.613 } 00:10:08.613 ] 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.613 18:50:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.613 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.613 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.613 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.613 "name": "Existed_Raid", 00:10:08.613 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:08.613 "strip_size_kb": 64, 00:10:08.613 "state": "configuring", 00:10:08.613 "raid_level": "raid0", 00:10:08.613 "superblock": false, 00:10:08.613 "num_base_bdevs": 4, 00:10:08.613 "num_base_bdevs_discovered": 3, 00:10:08.613 "num_base_bdevs_operational": 4, 00:10:08.613 "base_bdevs_list": [ 00:10:08.613 { 00:10:08.613 "name": "BaseBdev1", 00:10:08.613 "uuid": "62742faa-30c8-4340-b69b-ae6872e53011", 00:10:08.613 "is_configured": true, 00:10:08.613 "data_offset": 0, 00:10:08.613 "data_size": 65536 00:10:08.613 }, 00:10:08.613 { 00:10:08.613 "name": null, 00:10:08.613 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:08.613 "is_configured": false, 00:10:08.613 "data_offset": 0, 00:10:08.613 "data_size": 65536 00:10:08.613 }, 00:10:08.613 { 00:10:08.613 "name": "BaseBdev3", 00:10:08.613 "uuid": "28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:08.613 "is_configured": true, 00:10:08.613 "data_offset": 0, 00:10:08.613 "data_size": 65536 00:10:08.613 }, 00:10:08.613 { 00:10:08.613 "name": "BaseBdev4", 00:10:08.613 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:08.613 "is_configured": true, 00:10:08.613 "data_offset": 0, 00:10:08.613 "data_size": 65536 00:10:08.613 } 00:10:08.613 ] 00:10:08.613 }' 00:10:08.613 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.613 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.873 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.873 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.873 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.873 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.873 18:50:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.873 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:08.873 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:08.873 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.873 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.874 [2024-11-28 18:50:38.470375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.134 "name": "Existed_Raid", 00:10:09.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.134 "strip_size_kb": 64, 00:10:09.134 "state": "configuring", 00:10:09.134 "raid_level": "raid0", 00:10:09.134 "superblock": false, 00:10:09.134 "num_base_bdevs": 4, 00:10:09.134 "num_base_bdevs_discovered": 2, 00:10:09.134 "num_base_bdevs_operational": 4, 00:10:09.134 "base_bdevs_list": [ 00:10:09.134 { 00:10:09.134 "name": "BaseBdev1", 00:10:09.134 "uuid": "62742faa-30c8-4340-b69b-ae6872e53011", 00:10:09.134 "is_configured": true, 00:10:09.134 "data_offset": 0, 00:10:09.134 "data_size": 65536 00:10:09.134 }, 00:10:09.134 { 00:10:09.134 "name": null, 00:10:09.134 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:09.134 "is_configured": false, 00:10:09.134 "data_offset": 0, 00:10:09.134 "data_size": 65536 00:10:09.134 }, 00:10:09.134 { 00:10:09.134 "name": null, 00:10:09.134 "uuid": "28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:09.134 "is_configured": false, 00:10:09.134 "data_offset": 0, 00:10:09.134 "data_size": 65536 00:10:09.134 }, 00:10:09.134 { 00:10:09.134 "name": "BaseBdev4", 00:10:09.134 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:09.134 "is_configured": true, 00:10:09.134 "data_offset": 0, 00:10:09.134 "data_size": 65536 00:10:09.134 } 00:10:09.134 ] 00:10:09.134 }' 00:10:09.134 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.134 18:50:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.394 [2024-11-28 18:50:38.942559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.394 18:50:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.394 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.654 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.654 "name": "Existed_Raid", 00:10:09.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.654 "strip_size_kb": 64, 00:10:09.654 "state": "configuring", 00:10:09.654 "raid_level": "raid0", 00:10:09.654 "superblock": false, 00:10:09.654 "num_base_bdevs": 4, 00:10:09.654 "num_base_bdevs_discovered": 3, 00:10:09.654 "num_base_bdevs_operational": 4, 00:10:09.654 "base_bdevs_list": [ 00:10:09.654 { 00:10:09.654 "name": "BaseBdev1", 00:10:09.654 "uuid": "62742faa-30c8-4340-b69b-ae6872e53011", 00:10:09.654 "is_configured": true, 00:10:09.654 "data_offset": 0, 00:10:09.654 "data_size": 65536 00:10:09.654 }, 00:10:09.654 { 00:10:09.654 "name": null, 00:10:09.654 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:09.654 "is_configured": false, 00:10:09.654 "data_offset": 
0, 00:10:09.654 "data_size": 65536 00:10:09.654 }, 00:10:09.654 { 00:10:09.654 "name": "BaseBdev3", 00:10:09.654 "uuid": "28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:09.654 "is_configured": true, 00:10:09.654 "data_offset": 0, 00:10:09.654 "data_size": 65536 00:10:09.654 }, 00:10:09.654 { 00:10:09.654 "name": "BaseBdev4", 00:10:09.654 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:09.654 "is_configured": true, 00:10:09.654 "data_offset": 0, 00:10:09.654 "data_size": 65536 00:10:09.654 } 00:10:09.654 ] 00:10:09.654 }' 00:10:09.654 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.654 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.914 [2024-11-28 18:50:39.402707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.914 18:50:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.914 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.915 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.915 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.915 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.915 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.915 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.915 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.915 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.915 "name": "Existed_Raid", 00:10:09.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.915 "strip_size_kb": 64, 00:10:09.915 "state": "configuring", 00:10:09.915 
"raid_level": "raid0", 00:10:09.915 "superblock": false, 00:10:09.915 "num_base_bdevs": 4, 00:10:09.915 "num_base_bdevs_discovered": 2, 00:10:09.915 "num_base_bdevs_operational": 4, 00:10:09.915 "base_bdevs_list": [ 00:10:09.915 { 00:10:09.915 "name": null, 00:10:09.915 "uuid": "62742faa-30c8-4340-b69b-ae6872e53011", 00:10:09.915 "is_configured": false, 00:10:09.915 "data_offset": 0, 00:10:09.915 "data_size": 65536 00:10:09.915 }, 00:10:09.915 { 00:10:09.915 "name": null, 00:10:09.915 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:09.915 "is_configured": false, 00:10:09.915 "data_offset": 0, 00:10:09.915 "data_size": 65536 00:10:09.915 }, 00:10:09.915 { 00:10:09.915 "name": "BaseBdev3", 00:10:09.915 "uuid": "28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:09.915 "is_configured": true, 00:10:09.915 "data_offset": 0, 00:10:09.915 "data_size": 65536 00:10:09.915 }, 00:10:09.915 { 00:10:09.915 "name": "BaseBdev4", 00:10:09.915 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:09.915 "is_configured": true, 00:10:09.915 "data_offset": 0, 00:10:09.915 "data_size": 65536 00:10:09.915 } 00:10:09.915 ] 00:10:09.915 }' 00:10:09.915 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.915 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.484 [2024-11-28 18:50:39.865280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.484 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.485 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.485 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.485 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:10:10.485 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.485 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.485 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.485 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.485 "name": "Existed_Raid", 00:10:10.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.485 "strip_size_kb": 64, 00:10:10.485 "state": "configuring", 00:10:10.485 "raid_level": "raid0", 00:10:10.485 "superblock": false, 00:10:10.485 "num_base_bdevs": 4, 00:10:10.485 "num_base_bdevs_discovered": 3, 00:10:10.485 "num_base_bdevs_operational": 4, 00:10:10.485 "base_bdevs_list": [ 00:10:10.485 { 00:10:10.485 "name": null, 00:10:10.485 "uuid": "62742faa-30c8-4340-b69b-ae6872e53011", 00:10:10.485 "is_configured": false, 00:10:10.485 "data_offset": 0, 00:10:10.485 "data_size": 65536 00:10:10.485 }, 00:10:10.485 { 00:10:10.485 "name": "BaseBdev2", 00:10:10.485 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:10.485 "is_configured": true, 00:10:10.485 "data_offset": 0, 00:10:10.485 "data_size": 65536 00:10:10.485 }, 00:10:10.485 { 00:10:10.485 "name": "BaseBdev3", 00:10:10.485 "uuid": "28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:10.485 "is_configured": true, 00:10:10.485 "data_offset": 0, 00:10:10.485 "data_size": 65536 00:10:10.485 }, 00:10:10.485 { 00:10:10.485 "name": "BaseBdev4", 00:10:10.485 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:10.485 "is_configured": true, 00:10:10.485 "data_offset": 0, 00:10:10.485 "data_size": 65536 00:10:10.485 } 00:10:10.485 ] 00:10:10.485 }' 00:10:10.485 18:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.485 18:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.744 18:50:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.744 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.744 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.744 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.744 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 62742faa-30c8-4340-b69b-ae6872e53011 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.005 [2024-11-28 18:50:40.416254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:11.005 [2024-11-28 18:50:40.416352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:11.005 [2024-11-28 18:50:40.416379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:11.005 
[2024-11-28 18:50:40.416687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:11.005 [2024-11-28 18:50:40.416841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:11.005 [2024-11-28 18:50:40.416877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:11.005 [2024-11-28 18:50:40.417076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.005 NewBaseBdev 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.005 [ 00:10:11.005 { 00:10:11.005 "name": "NewBaseBdev", 00:10:11.005 "aliases": [ 00:10:11.005 "62742faa-30c8-4340-b69b-ae6872e53011" 00:10:11.005 ], 00:10:11.005 "product_name": "Malloc disk", 00:10:11.005 "block_size": 512, 00:10:11.005 "num_blocks": 65536, 00:10:11.005 "uuid": "62742faa-30c8-4340-b69b-ae6872e53011", 00:10:11.005 "assigned_rate_limits": { 00:10:11.005 "rw_ios_per_sec": 0, 00:10:11.005 "rw_mbytes_per_sec": 0, 00:10:11.005 "r_mbytes_per_sec": 0, 00:10:11.005 "w_mbytes_per_sec": 0 00:10:11.005 }, 00:10:11.005 "claimed": true, 00:10:11.005 "claim_type": "exclusive_write", 00:10:11.005 "zoned": false, 00:10:11.005 "supported_io_types": { 00:10:11.005 "read": true, 00:10:11.005 "write": true, 00:10:11.005 "unmap": true, 00:10:11.005 "flush": true, 00:10:11.005 "reset": true, 00:10:11.005 "nvme_admin": false, 00:10:11.005 "nvme_io": false, 00:10:11.005 "nvme_io_md": false, 00:10:11.005 "write_zeroes": true, 00:10:11.005 "zcopy": true, 00:10:11.005 "get_zone_info": false, 00:10:11.005 "zone_management": false, 00:10:11.005 "zone_append": false, 00:10:11.005 "compare": false, 00:10:11.005 "compare_and_write": false, 00:10:11.005 "abort": true, 00:10:11.005 "seek_hole": false, 00:10:11.005 "seek_data": false, 00:10:11.005 "copy": true, 00:10:11.005 "nvme_iov_md": false 00:10:11.005 }, 00:10:11.005 "memory_domains": [ 00:10:11.005 { 00:10:11.005 "dma_device_id": "system", 00:10:11.005 "dma_device_type": 1 00:10:11.005 }, 00:10:11.005 { 00:10:11.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.005 "dma_device_type": 2 00:10:11.005 } 00:10:11.005 ], 00:10:11.005 "driver_specific": {} 00:10:11.005 } 00:10:11.005 ] 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
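The xtrace entries above show the test's verification loop: `verify_raid_bdev_state` (from `bdev/bdev_raid.sh`) fetches the `Existed_Raid` record with `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares fields such as `state` and `num_base_bdevs_discovered` against the expected values. A minimal standalone sketch of that consistency check follows — the field names are taken directly from the JSON dumps in this log, but the helper function itself is hypothetical, not part of SPDK:

```python
# Sketch of the discovered-count check verify_raid_bdev_state performs.
# Assumption: field names mirror the bdev_raid_get_bdevs JSON in this log;
# discovered_count() is illustrative, not an SPDK API.
import json

def discovered_count(raid_bdev_info: str) -> int:
    """Count configured base bdevs, as num_base_bdevs_discovered reports."""
    info = json.loads(raid_bdev_info)
    return sum(1 for b in info["base_bdevs_list"] if b["is_configured"])

# Sample shaped like the log's dump after BaseBdev1 was removed:
sample = json.dumps({
    "name": "Existed_Raid",
    "state": "configuring",
    "num_base_bdevs": 4,
    "base_bdevs_list": [
        {"name": None, "is_configured": False},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
        {"name": "BaseBdev4", "is_configured": True},
    ],
})
print(discovered_count(sample))  # 3, matching "num_base_bdevs_discovered": 3
```

The test harness performs the same comparison in shell via `jq`, asserting that removing a base bdev drops the discovered count while the raid stays in the `configuring` state until all four members are present.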
00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.005 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.005 "name": "Existed_Raid", 00:10:11.005 "uuid": "3eb81671-063a-4610-89d4-1cce8ae33ddb", 00:10:11.005 "strip_size_kb": 64, 00:10:11.005 "state": "online", 
00:10:11.005 "raid_level": "raid0", 00:10:11.005 "superblock": false, 00:10:11.005 "num_base_bdevs": 4, 00:10:11.005 "num_base_bdevs_discovered": 4, 00:10:11.005 "num_base_bdevs_operational": 4, 00:10:11.005 "base_bdevs_list": [ 00:10:11.005 { 00:10:11.005 "name": "NewBaseBdev", 00:10:11.005 "uuid": "62742faa-30c8-4340-b69b-ae6872e53011", 00:10:11.005 "is_configured": true, 00:10:11.005 "data_offset": 0, 00:10:11.005 "data_size": 65536 00:10:11.005 }, 00:10:11.005 { 00:10:11.005 "name": "BaseBdev2", 00:10:11.006 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:11.006 "is_configured": true, 00:10:11.006 "data_offset": 0, 00:10:11.006 "data_size": 65536 00:10:11.006 }, 00:10:11.006 { 00:10:11.006 "name": "BaseBdev3", 00:10:11.006 "uuid": "28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:11.006 "is_configured": true, 00:10:11.006 "data_offset": 0, 00:10:11.006 "data_size": 65536 00:10:11.006 }, 00:10:11.006 { 00:10:11.006 "name": "BaseBdev4", 00:10:11.006 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:11.006 "is_configured": true, 00:10:11.006 "data_offset": 0, 00:10:11.006 "data_size": 65536 00:10:11.006 } 00:10:11.006 ] 00:10:11.006 }' 00:10:11.006 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.006 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.265 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.265 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.265 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.265 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.265 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.265 18:50:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.266 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.266 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.266 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.266 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.266 [2024-11-28 18:50:40.856720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.526 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.526 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.526 "name": "Existed_Raid", 00:10:11.526 "aliases": [ 00:10:11.526 "3eb81671-063a-4610-89d4-1cce8ae33ddb" 00:10:11.526 ], 00:10:11.526 "product_name": "Raid Volume", 00:10:11.526 "block_size": 512, 00:10:11.526 "num_blocks": 262144, 00:10:11.526 "uuid": "3eb81671-063a-4610-89d4-1cce8ae33ddb", 00:10:11.526 "assigned_rate_limits": { 00:10:11.526 "rw_ios_per_sec": 0, 00:10:11.526 "rw_mbytes_per_sec": 0, 00:10:11.526 "r_mbytes_per_sec": 0, 00:10:11.526 "w_mbytes_per_sec": 0 00:10:11.526 }, 00:10:11.526 "claimed": false, 00:10:11.526 "zoned": false, 00:10:11.526 "supported_io_types": { 00:10:11.526 "read": true, 00:10:11.526 "write": true, 00:10:11.526 "unmap": true, 00:10:11.526 "flush": true, 00:10:11.526 "reset": true, 00:10:11.526 "nvme_admin": false, 00:10:11.526 "nvme_io": false, 00:10:11.526 "nvme_io_md": false, 00:10:11.526 "write_zeroes": true, 00:10:11.526 "zcopy": false, 00:10:11.526 "get_zone_info": false, 00:10:11.526 "zone_management": false, 00:10:11.526 "zone_append": false, 00:10:11.526 "compare": false, 00:10:11.526 "compare_and_write": false, 00:10:11.526 "abort": false, 00:10:11.526 "seek_hole": false, 00:10:11.526 "seek_data": 
false, 00:10:11.526 "copy": false, 00:10:11.526 "nvme_iov_md": false 00:10:11.526 }, 00:10:11.526 "memory_domains": [ 00:10:11.526 { 00:10:11.526 "dma_device_id": "system", 00:10:11.526 "dma_device_type": 1 00:10:11.526 }, 00:10:11.526 { 00:10:11.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.526 "dma_device_type": 2 00:10:11.526 }, 00:10:11.526 { 00:10:11.526 "dma_device_id": "system", 00:10:11.526 "dma_device_type": 1 00:10:11.526 }, 00:10:11.526 { 00:10:11.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.526 "dma_device_type": 2 00:10:11.526 }, 00:10:11.526 { 00:10:11.526 "dma_device_id": "system", 00:10:11.526 "dma_device_type": 1 00:10:11.526 }, 00:10:11.526 { 00:10:11.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.526 "dma_device_type": 2 00:10:11.526 }, 00:10:11.526 { 00:10:11.526 "dma_device_id": "system", 00:10:11.526 "dma_device_type": 1 00:10:11.526 }, 00:10:11.526 { 00:10:11.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.526 "dma_device_type": 2 00:10:11.526 } 00:10:11.526 ], 00:10:11.526 "driver_specific": { 00:10:11.526 "raid": { 00:10:11.526 "uuid": "3eb81671-063a-4610-89d4-1cce8ae33ddb", 00:10:11.526 "strip_size_kb": 64, 00:10:11.526 "state": "online", 00:10:11.526 "raid_level": "raid0", 00:10:11.526 "superblock": false, 00:10:11.526 "num_base_bdevs": 4, 00:10:11.526 "num_base_bdevs_discovered": 4, 00:10:11.527 "num_base_bdevs_operational": 4, 00:10:11.527 "base_bdevs_list": [ 00:10:11.527 { 00:10:11.527 "name": "NewBaseBdev", 00:10:11.527 "uuid": "62742faa-30c8-4340-b69b-ae6872e53011", 00:10:11.527 "is_configured": true, 00:10:11.527 "data_offset": 0, 00:10:11.527 "data_size": 65536 00:10:11.527 }, 00:10:11.527 { 00:10:11.527 "name": "BaseBdev2", 00:10:11.527 "uuid": "dcca0ad0-a7e1-4f5a-97a3-9442761f1c6c", 00:10:11.527 "is_configured": true, 00:10:11.527 "data_offset": 0, 00:10:11.527 "data_size": 65536 00:10:11.527 }, 00:10:11.527 { 00:10:11.527 "name": "BaseBdev3", 00:10:11.527 "uuid": 
"28783ee8-2663-4869-a97e-012a1e1cdcb1", 00:10:11.527 "is_configured": true, 00:10:11.527 "data_offset": 0, 00:10:11.527 "data_size": 65536 00:10:11.527 }, 00:10:11.527 { 00:10:11.527 "name": "BaseBdev4", 00:10:11.527 "uuid": "9740d621-b179-4610-93fb-c63cce2cb78d", 00:10:11.527 "is_configured": true, 00:10:11.527 "data_offset": 0, 00:10:11.527 "data_size": 65536 00:10:11.527 } 00:10:11.527 ] 00:10:11.527 } 00:10:11.527 } 00:10:11.527 }' 00:10:11.527 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.527 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:11.527 BaseBdev2 00:10:11.527 BaseBdev3 00:10:11.527 BaseBdev4' 00:10:11.527 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.527 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.527 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.527 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:11.527 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.527 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.527 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.527 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.788 [2024-11-28 18:50:41.168496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.788 [2024-11-28 18:50:41.168575] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.788 [2024-11-28 18:50:41.168675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.788 [2024-11-28 18:50:41.168741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.788 [2024-11-28 18:50:41.168770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.788 18:50:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81876 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81876 ']' 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 81876 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81876 00:10:11.788 killing process with pid 81876 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81876' 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 81876 00:10:11.788 [2024-11-28 18:50:41.217745] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.788 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 81876 00:10:11.788 [2024-11-28 18:50:41.257328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.049 00:10:12.049 real 0m9.257s 00:10:12.049 user 0m15.875s 00:10:12.049 sys 0m1.905s 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.049 ************************************ 00:10:12.049 END TEST raid_state_function_test 00:10:12.049 ************************************ 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.049 18:50:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:12.049 18:50:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:12.049 18:50:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.049 18:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.049 ************************************ 00:10:12.049 START TEST raid_state_function_test_sb 00:10:12.049 ************************************ 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:12.049 18:50:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82521 00:10:12.049 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.050 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82521' 00:10:12.050 Process raid pid: 82521 00:10:12.050 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82521 00:10:12.050 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82521 ']' 00:10:12.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.050 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.050 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.050 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.050 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.050 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.050 [2024-11-28 18:50:41.648387] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:12.050 [2024-11-28 18:50:41.648518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.310 [2024-11-28 18:50:41.785973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:12.310 [2024-11-28 18:50:41.821050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.310 [2024-11-28 18:50:41.846974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.310 [2024-11-28 18:50:41.888848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.310 [2024-11-28 18:50:41.888951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.879 [2024-11-28 18:50:42.468363] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.879 [2024-11-28 18:50:42.468419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.879 [2024-11-28 18:50:42.468443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.879 [2024-11-28 18:50:42.468451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.879 [2024-11-28 18:50:42.468478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.879 [2024-11-28 18:50:42.468485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.879 [2024-11-28 18:50:42.468494] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:12.879 [2024-11-28 18:50:42.468501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.879 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.880 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.880 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.880 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.880 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.880 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.880 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.880 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.880 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.880 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.141 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:13.141 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.141 "name": "Existed_Raid", 00:10:13.141 "uuid": "cd2780fd-5cfc-4291-90a9-41139ba6b280", 00:10:13.141 "strip_size_kb": 64, 00:10:13.141 "state": "configuring", 00:10:13.141 "raid_level": "raid0", 00:10:13.141 "superblock": true, 00:10:13.141 "num_base_bdevs": 4, 00:10:13.141 "num_base_bdevs_discovered": 0, 00:10:13.141 "num_base_bdevs_operational": 4, 00:10:13.141 "base_bdevs_list": [ 00:10:13.141 { 00:10:13.141 "name": "BaseBdev1", 00:10:13.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.141 "is_configured": false, 00:10:13.141 "data_offset": 0, 00:10:13.141 "data_size": 0 00:10:13.141 }, 00:10:13.141 { 00:10:13.141 "name": "BaseBdev2", 00:10:13.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.142 "is_configured": false, 00:10:13.142 "data_offset": 0, 00:10:13.142 "data_size": 0 00:10:13.142 }, 00:10:13.142 { 00:10:13.142 "name": "BaseBdev3", 00:10:13.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.142 "is_configured": false, 00:10:13.142 "data_offset": 0, 00:10:13.142 "data_size": 0 00:10:13.142 }, 00:10:13.142 { 00:10:13.142 "name": "BaseBdev4", 00:10:13.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.142 "is_configured": false, 00:10:13.142 "data_offset": 0, 00:10:13.142 "data_size": 0 00:10:13.142 } 00:10:13.142 ] 00:10:13.142 }' 00:10:13.142 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.142 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:13.402 [2024-11-28 18:50:42.900362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.402 [2024-11-28 18:50:42.900445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.402 [2024-11-28 18:50:42.908413] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.402 [2024-11-28 18:50:42.908492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.402 [2024-11-28 18:50:42.908522] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.402 [2024-11-28 18:50:42.908542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.402 [2024-11-28 18:50:42.908561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.402 [2024-11-28 18:50:42.908580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.402 [2024-11-28 18:50:42.908599] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:13.402 [2024-11-28 18:50:42.908617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.402 18:50:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.402 [2024-11-28 18:50:42.925314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.402 BaseBdev1 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:13.402 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.402 [ 00:10:13.402 { 00:10:13.402 "name": "BaseBdev1", 00:10:13.402 "aliases": [ 00:10:13.402 "fd2d209c-2e3f-47ac-b28c-5047273e967c" 00:10:13.402 ], 00:10:13.402 "product_name": "Malloc disk", 00:10:13.402 "block_size": 512, 00:10:13.402 "num_blocks": 65536, 00:10:13.402 "uuid": "fd2d209c-2e3f-47ac-b28c-5047273e967c", 00:10:13.402 "assigned_rate_limits": { 00:10:13.402 "rw_ios_per_sec": 0, 00:10:13.402 "rw_mbytes_per_sec": 0, 00:10:13.402 "r_mbytes_per_sec": 0, 00:10:13.402 "w_mbytes_per_sec": 0 00:10:13.402 }, 00:10:13.402 "claimed": true, 00:10:13.402 "claim_type": "exclusive_write", 00:10:13.402 "zoned": false, 00:10:13.402 "supported_io_types": { 00:10:13.402 "read": true, 00:10:13.402 "write": true, 00:10:13.402 "unmap": true, 00:10:13.402 "flush": true, 00:10:13.402 "reset": true, 00:10:13.402 "nvme_admin": false, 00:10:13.402 "nvme_io": false, 00:10:13.402 "nvme_io_md": false, 00:10:13.402 "write_zeroes": true, 00:10:13.402 "zcopy": true, 00:10:13.402 "get_zone_info": false, 00:10:13.402 "zone_management": false, 00:10:13.402 "zone_append": false, 00:10:13.402 "compare": false, 00:10:13.402 "compare_and_write": false, 00:10:13.402 "abort": true, 00:10:13.402 "seek_hole": false, 00:10:13.402 "seek_data": false, 00:10:13.402 "copy": true, 00:10:13.402 "nvme_iov_md": false 00:10:13.402 }, 00:10:13.403 "memory_domains": [ 00:10:13.403 { 00:10:13.403 "dma_device_id": "system", 00:10:13.403 "dma_device_type": 1 00:10:13.403 }, 00:10:13.403 { 00:10:13.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.403 "dma_device_type": 2 00:10:13.403 } 00:10:13.403 ], 00:10:13.403 "driver_specific": {} 00:10:13.403 } 00:10:13.403 ] 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.403 
18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.403 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.661 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.661 "name": "Existed_Raid", 00:10:13.661 "uuid": "96c1301e-98c0-4450-9231-8066cac10e27", 00:10:13.661 "strip_size_kb": 
64, 00:10:13.661 "state": "configuring", 00:10:13.661 "raid_level": "raid0", 00:10:13.661 "superblock": true, 00:10:13.661 "num_base_bdevs": 4, 00:10:13.661 "num_base_bdevs_discovered": 1, 00:10:13.661 "num_base_bdevs_operational": 4, 00:10:13.661 "base_bdevs_list": [ 00:10:13.661 { 00:10:13.661 "name": "BaseBdev1", 00:10:13.661 "uuid": "fd2d209c-2e3f-47ac-b28c-5047273e967c", 00:10:13.661 "is_configured": true, 00:10:13.661 "data_offset": 2048, 00:10:13.661 "data_size": 63488 00:10:13.661 }, 00:10:13.661 { 00:10:13.661 "name": "BaseBdev2", 00:10:13.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.661 "is_configured": false, 00:10:13.661 "data_offset": 0, 00:10:13.661 "data_size": 0 00:10:13.661 }, 00:10:13.661 { 00:10:13.661 "name": "BaseBdev3", 00:10:13.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.661 "is_configured": false, 00:10:13.661 "data_offset": 0, 00:10:13.661 "data_size": 0 00:10:13.661 }, 00:10:13.661 { 00:10:13.661 "name": "BaseBdev4", 00:10:13.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.661 "is_configured": false, 00:10:13.661 "data_offset": 0, 00:10:13.661 "data_size": 0 00:10:13.661 } 00:10:13.661 ] 00:10:13.661 }' 00:10:13.661 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.661 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.921 [2024-11-28 18:50:43.357473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.921 [2024-11-28 18:50:43.357523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name 
Existed_Raid, state configuring 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.921 [2024-11-28 18:50:43.369517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.921 [2024-11-28 18:50:43.371315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.921 [2024-11-28 18:50:43.371379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.921 [2024-11-28 18:50:43.371390] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.921 [2024-11-28 18:50:43.371398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.921 [2024-11-28 18:50:43.371406] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:13.921 [2024-11-28 18:50:43.371413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.921 18:50:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.921 "name": "Existed_Raid", 00:10:13.921 "uuid": "188c4af3-35a0-41a3-a63e-2425f22139f0", 00:10:13.921 "strip_size_kb": 64, 00:10:13.921 "state": "configuring", 00:10:13.921 "raid_level": "raid0", 00:10:13.921 "superblock": true, 00:10:13.921 "num_base_bdevs": 4, 00:10:13.921 
"num_base_bdevs_discovered": 1, 00:10:13.921 "num_base_bdevs_operational": 4, 00:10:13.921 "base_bdevs_list": [ 00:10:13.921 { 00:10:13.921 "name": "BaseBdev1", 00:10:13.921 "uuid": "fd2d209c-2e3f-47ac-b28c-5047273e967c", 00:10:13.921 "is_configured": true, 00:10:13.921 "data_offset": 2048, 00:10:13.921 "data_size": 63488 00:10:13.921 }, 00:10:13.921 { 00:10:13.921 "name": "BaseBdev2", 00:10:13.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.921 "is_configured": false, 00:10:13.921 "data_offset": 0, 00:10:13.921 "data_size": 0 00:10:13.921 }, 00:10:13.921 { 00:10:13.921 "name": "BaseBdev3", 00:10:13.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.921 "is_configured": false, 00:10:13.921 "data_offset": 0, 00:10:13.921 "data_size": 0 00:10:13.921 }, 00:10:13.921 { 00:10:13.921 "name": "BaseBdev4", 00:10:13.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.921 "is_configured": false, 00:10:13.921 "data_offset": 0, 00:10:13.921 "data_size": 0 00:10:13.921 } 00:10:13.921 ] 00:10:13.921 }' 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.921 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.491 [2024-11-28 18:50:43.852484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.491 BaseBdev2 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:14.491 18:50:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.491 [ 00:10:14.491 { 00:10:14.491 "name": "BaseBdev2", 00:10:14.491 "aliases": [ 00:10:14.491 "63773329-7b56-4071-bc6f-88aa435e59a9" 00:10:14.491 ], 00:10:14.491 "product_name": "Malloc disk", 00:10:14.491 "block_size": 512, 00:10:14.491 "num_blocks": 65536, 00:10:14.491 "uuid": "63773329-7b56-4071-bc6f-88aa435e59a9", 00:10:14.491 "assigned_rate_limits": { 00:10:14.491 "rw_ios_per_sec": 0, 00:10:14.491 "rw_mbytes_per_sec": 0, 00:10:14.491 "r_mbytes_per_sec": 0, 00:10:14.491 "w_mbytes_per_sec": 0 00:10:14.491 }, 00:10:14.491 "claimed": true, 00:10:14.491 "claim_type": "exclusive_write", 00:10:14.491 "zoned": false, 
00:10:14.491 "supported_io_types": { 00:10:14.491 "read": true, 00:10:14.491 "write": true, 00:10:14.491 "unmap": true, 00:10:14.491 "flush": true, 00:10:14.491 "reset": true, 00:10:14.491 "nvme_admin": false, 00:10:14.491 "nvme_io": false, 00:10:14.491 "nvme_io_md": false, 00:10:14.491 "write_zeroes": true, 00:10:14.491 "zcopy": true, 00:10:14.491 "get_zone_info": false, 00:10:14.491 "zone_management": false, 00:10:14.491 "zone_append": false, 00:10:14.491 "compare": false, 00:10:14.491 "compare_and_write": false, 00:10:14.491 "abort": true, 00:10:14.491 "seek_hole": false, 00:10:14.491 "seek_data": false, 00:10:14.491 "copy": true, 00:10:14.491 "nvme_iov_md": false 00:10:14.491 }, 00:10:14.491 "memory_domains": [ 00:10:14.491 { 00:10:14.491 "dma_device_id": "system", 00:10:14.491 "dma_device_type": 1 00:10:14.491 }, 00:10:14.491 { 00:10:14.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.491 "dma_device_type": 2 00:10:14.491 } 00:10:14.491 ], 00:10:14.491 "driver_specific": {} 00:10:14.491 } 00:10:14.491 ] 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.491 18:50:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.491 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.491 "name": "Existed_Raid", 00:10:14.491 "uuid": "188c4af3-35a0-41a3-a63e-2425f22139f0", 00:10:14.491 "strip_size_kb": 64, 00:10:14.491 "state": "configuring", 00:10:14.491 "raid_level": "raid0", 00:10:14.492 "superblock": true, 00:10:14.492 "num_base_bdevs": 4, 00:10:14.492 "num_base_bdevs_discovered": 2, 00:10:14.492 "num_base_bdevs_operational": 4, 00:10:14.492 "base_bdevs_list": [ 00:10:14.492 { 00:10:14.492 "name": "BaseBdev1", 00:10:14.492 "uuid": "fd2d209c-2e3f-47ac-b28c-5047273e967c", 00:10:14.492 "is_configured": true, 00:10:14.492 "data_offset": 2048, 00:10:14.492 "data_size": 63488 00:10:14.492 }, 00:10:14.492 { 
00:10:14.492 "name": "BaseBdev2", 00:10:14.492 "uuid": "63773329-7b56-4071-bc6f-88aa435e59a9", 00:10:14.492 "is_configured": true, 00:10:14.492 "data_offset": 2048, 00:10:14.492 "data_size": 63488 00:10:14.492 }, 00:10:14.492 { 00:10:14.492 "name": "BaseBdev3", 00:10:14.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.492 "is_configured": false, 00:10:14.492 "data_offset": 0, 00:10:14.492 "data_size": 0 00:10:14.492 }, 00:10:14.492 { 00:10:14.492 "name": "BaseBdev4", 00:10:14.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.492 "is_configured": false, 00:10:14.492 "data_offset": 0, 00:10:14.492 "data_size": 0 00:10:14.492 } 00:10:14.492 ] 00:10:14.492 }' 00:10:14.492 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.492 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.750 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.750 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.750 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.010 [2024-11-28 18:50:44.366782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.010 BaseBdev3 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.010 18:50:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.010 [ 00:10:15.010 { 00:10:15.010 "name": "BaseBdev3", 00:10:15.010 "aliases": [ 00:10:15.010 "cc61a78c-b28f-4dbe-873a-ccc7526eb0e7" 00:10:15.010 ], 00:10:15.010 "product_name": "Malloc disk", 00:10:15.010 "block_size": 512, 00:10:15.010 "num_blocks": 65536, 00:10:15.010 "uuid": "cc61a78c-b28f-4dbe-873a-ccc7526eb0e7", 00:10:15.010 "assigned_rate_limits": { 00:10:15.010 "rw_ios_per_sec": 0, 00:10:15.010 "rw_mbytes_per_sec": 0, 00:10:15.010 "r_mbytes_per_sec": 0, 00:10:15.010 "w_mbytes_per_sec": 0 00:10:15.010 }, 00:10:15.010 "claimed": true, 00:10:15.010 "claim_type": "exclusive_write", 00:10:15.010 "zoned": false, 00:10:15.010 "supported_io_types": { 00:10:15.010 "read": true, 00:10:15.010 "write": true, 00:10:15.010 "unmap": true, 00:10:15.010 "flush": true, 00:10:15.010 "reset": true, 00:10:15.010 "nvme_admin": false, 00:10:15.010 "nvme_io": false, 00:10:15.010 "nvme_io_md": false, 00:10:15.010 "write_zeroes": true, 00:10:15.010 "zcopy": true, 
00:10:15.010 "get_zone_info": false, 00:10:15.010 "zone_management": false, 00:10:15.010 "zone_append": false, 00:10:15.010 "compare": false, 00:10:15.010 "compare_and_write": false, 00:10:15.010 "abort": true, 00:10:15.010 "seek_hole": false, 00:10:15.010 "seek_data": false, 00:10:15.010 "copy": true, 00:10:15.010 "nvme_iov_md": false 00:10:15.010 }, 00:10:15.010 "memory_domains": [ 00:10:15.010 { 00:10:15.010 "dma_device_id": "system", 00:10:15.010 "dma_device_type": 1 00:10:15.010 }, 00:10:15.010 { 00:10:15.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.010 "dma_device_type": 2 00:10:15.010 } 00:10:15.010 ], 00:10:15.010 "driver_specific": {} 00:10:15.010 } 00:10:15.010 ] 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.010 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.011 
18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.011 "name": "Existed_Raid", 00:10:15.011 "uuid": "188c4af3-35a0-41a3-a63e-2425f22139f0", 00:10:15.011 "strip_size_kb": 64, 00:10:15.011 "state": "configuring", 00:10:15.011 "raid_level": "raid0", 00:10:15.011 "superblock": true, 00:10:15.011 "num_base_bdevs": 4, 00:10:15.011 "num_base_bdevs_discovered": 3, 00:10:15.011 "num_base_bdevs_operational": 4, 00:10:15.011 "base_bdevs_list": [ 00:10:15.011 { 00:10:15.011 "name": "BaseBdev1", 00:10:15.011 "uuid": "fd2d209c-2e3f-47ac-b28c-5047273e967c", 00:10:15.011 "is_configured": true, 00:10:15.011 "data_offset": 2048, 00:10:15.011 "data_size": 63488 00:10:15.011 }, 00:10:15.011 { 00:10:15.011 "name": "BaseBdev2", 00:10:15.011 "uuid": "63773329-7b56-4071-bc6f-88aa435e59a9", 00:10:15.011 "is_configured": true, 00:10:15.011 "data_offset": 2048, 00:10:15.011 "data_size": 63488 00:10:15.011 }, 00:10:15.011 { 00:10:15.011 "name": "BaseBdev3", 00:10:15.011 "uuid": "cc61a78c-b28f-4dbe-873a-ccc7526eb0e7", 00:10:15.011 
"is_configured": true, 00:10:15.011 "data_offset": 2048, 00:10:15.011 "data_size": 63488 00:10:15.011 }, 00:10:15.011 { 00:10:15.011 "name": "BaseBdev4", 00:10:15.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.011 "is_configured": false, 00:10:15.011 "data_offset": 0, 00:10:15.011 "data_size": 0 00:10:15.011 } 00:10:15.011 ] 00:10:15.011 }' 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.011 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.271 [2024-11-28 18:50:44.805816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.271 BaseBdev4 00:10:15.271 [2024-11-28 18:50:44.806097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:15.271 [2024-11-28 18:50:44.806126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.271 [2024-11-28 18:50:44.806421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:15.271 [2024-11-28 18:50:44.806591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:15.271 [2024-11-28 18:50:44.806602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:15.271 [2024-11-28 18:50:44.806717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.271 18:50:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.271 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.271 [ 00:10:15.271 { 00:10:15.271 "name": "BaseBdev4", 00:10:15.271 "aliases": [ 00:10:15.271 "232e80d4-c3ef-4fe7-a94b-5289f9dc25ac" 00:10:15.271 ], 00:10:15.271 "product_name": "Malloc disk", 00:10:15.271 "block_size": 512, 00:10:15.271 "num_blocks": 65536, 00:10:15.271 "uuid": "232e80d4-c3ef-4fe7-a94b-5289f9dc25ac", 00:10:15.271 "assigned_rate_limits": { 00:10:15.271 "rw_ios_per_sec": 0, 00:10:15.271 "rw_mbytes_per_sec": 0, 00:10:15.271 "r_mbytes_per_sec": 0, 00:10:15.271 "w_mbytes_per_sec": 0 
00:10:15.271 }, 00:10:15.271 "claimed": true, 00:10:15.271 "claim_type": "exclusive_write", 00:10:15.271 "zoned": false, 00:10:15.271 "supported_io_types": { 00:10:15.271 "read": true, 00:10:15.271 "write": true, 00:10:15.271 "unmap": true, 00:10:15.271 "flush": true, 00:10:15.271 "reset": true, 00:10:15.271 "nvme_admin": false, 00:10:15.271 "nvme_io": false, 00:10:15.271 "nvme_io_md": false, 00:10:15.271 "write_zeroes": true, 00:10:15.271 "zcopy": true, 00:10:15.271 "get_zone_info": false, 00:10:15.271 "zone_management": false, 00:10:15.271 "zone_append": false, 00:10:15.271 "compare": false, 00:10:15.271 "compare_and_write": false, 00:10:15.271 "abort": true, 00:10:15.271 "seek_hole": false, 00:10:15.271 "seek_data": false, 00:10:15.271 "copy": true, 00:10:15.271 "nvme_iov_md": false 00:10:15.271 }, 00:10:15.271 "memory_domains": [ 00:10:15.271 { 00:10:15.271 "dma_device_id": "system", 00:10:15.271 "dma_device_type": 1 00:10:15.271 }, 00:10:15.271 { 00:10:15.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.272 "dma_device_type": 2 00:10:15.272 } 00:10:15.272 ], 00:10:15.272 "driver_specific": {} 00:10:15.272 } 00:10:15.272 ] 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.272 18:50:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.272 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.532 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.532 "name": "Existed_Raid", 00:10:15.532 "uuid": "188c4af3-35a0-41a3-a63e-2425f22139f0", 00:10:15.532 "strip_size_kb": 64, 00:10:15.532 "state": "online", 00:10:15.532 "raid_level": "raid0", 00:10:15.532 "superblock": true, 00:10:15.532 "num_base_bdevs": 4, 00:10:15.532 "num_base_bdevs_discovered": 4, 00:10:15.532 "num_base_bdevs_operational": 4, 00:10:15.532 "base_bdevs_list": [ 00:10:15.532 { 00:10:15.532 "name": "BaseBdev1", 00:10:15.532 "uuid": "fd2d209c-2e3f-47ac-b28c-5047273e967c", 00:10:15.532 "is_configured": 
true, 00:10:15.532 "data_offset": 2048, 00:10:15.532 "data_size": 63488 00:10:15.532 }, 00:10:15.532 { 00:10:15.532 "name": "BaseBdev2", 00:10:15.532 "uuid": "63773329-7b56-4071-bc6f-88aa435e59a9", 00:10:15.532 "is_configured": true, 00:10:15.532 "data_offset": 2048, 00:10:15.532 "data_size": 63488 00:10:15.532 }, 00:10:15.532 { 00:10:15.532 "name": "BaseBdev3", 00:10:15.532 "uuid": "cc61a78c-b28f-4dbe-873a-ccc7526eb0e7", 00:10:15.532 "is_configured": true, 00:10:15.532 "data_offset": 2048, 00:10:15.532 "data_size": 63488 00:10:15.532 }, 00:10:15.532 { 00:10:15.532 "name": "BaseBdev4", 00:10:15.532 "uuid": "232e80d4-c3ef-4fe7-a94b-5289f9dc25ac", 00:10:15.532 "is_configured": true, 00:10:15.532 "data_offset": 2048, 00:10:15.532 "data_size": 63488 00:10:15.532 } 00:10:15.532 ] 00:10:15.532 }' 00:10:15.532 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.532 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.792 18:50:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.792 [2024-11-28 18:50:45.242289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.792 "name": "Existed_Raid", 00:10:15.792 "aliases": [ 00:10:15.792 "188c4af3-35a0-41a3-a63e-2425f22139f0" 00:10:15.792 ], 00:10:15.792 "product_name": "Raid Volume", 00:10:15.792 "block_size": 512, 00:10:15.792 "num_blocks": 253952, 00:10:15.792 "uuid": "188c4af3-35a0-41a3-a63e-2425f22139f0", 00:10:15.792 "assigned_rate_limits": { 00:10:15.792 "rw_ios_per_sec": 0, 00:10:15.792 "rw_mbytes_per_sec": 0, 00:10:15.792 "r_mbytes_per_sec": 0, 00:10:15.792 "w_mbytes_per_sec": 0 00:10:15.792 }, 00:10:15.792 "claimed": false, 00:10:15.792 "zoned": false, 00:10:15.792 "supported_io_types": { 00:10:15.792 "read": true, 00:10:15.792 "write": true, 00:10:15.792 "unmap": true, 00:10:15.792 "flush": true, 00:10:15.792 "reset": true, 00:10:15.792 "nvme_admin": false, 00:10:15.792 "nvme_io": false, 00:10:15.792 "nvme_io_md": false, 00:10:15.792 "write_zeroes": true, 00:10:15.792 "zcopy": false, 00:10:15.792 "get_zone_info": false, 00:10:15.792 "zone_management": false, 00:10:15.792 "zone_append": false, 00:10:15.792 "compare": false, 00:10:15.792 "compare_and_write": false, 00:10:15.792 "abort": false, 00:10:15.792 "seek_hole": false, 00:10:15.792 "seek_data": false, 00:10:15.792 "copy": false, 00:10:15.792 "nvme_iov_md": false 00:10:15.792 }, 00:10:15.792 "memory_domains": [ 00:10:15.792 { 00:10:15.792 "dma_device_id": "system", 00:10:15.792 "dma_device_type": 1 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.792 
"dma_device_type": 2 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "dma_device_id": "system", 00:10:15.792 "dma_device_type": 1 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.792 "dma_device_type": 2 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "dma_device_id": "system", 00:10:15.792 "dma_device_type": 1 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.792 "dma_device_type": 2 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "dma_device_id": "system", 00:10:15.792 "dma_device_type": 1 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.792 "dma_device_type": 2 00:10:15.792 } 00:10:15.792 ], 00:10:15.792 "driver_specific": { 00:10:15.792 "raid": { 00:10:15.792 "uuid": "188c4af3-35a0-41a3-a63e-2425f22139f0", 00:10:15.792 "strip_size_kb": 64, 00:10:15.792 "state": "online", 00:10:15.792 "raid_level": "raid0", 00:10:15.792 "superblock": true, 00:10:15.792 "num_base_bdevs": 4, 00:10:15.792 "num_base_bdevs_discovered": 4, 00:10:15.792 "num_base_bdevs_operational": 4, 00:10:15.792 "base_bdevs_list": [ 00:10:15.792 { 00:10:15.792 "name": "BaseBdev1", 00:10:15.792 "uuid": "fd2d209c-2e3f-47ac-b28c-5047273e967c", 00:10:15.792 "is_configured": true, 00:10:15.792 "data_offset": 2048, 00:10:15.792 "data_size": 63488 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "name": "BaseBdev2", 00:10:15.792 "uuid": "63773329-7b56-4071-bc6f-88aa435e59a9", 00:10:15.792 "is_configured": true, 00:10:15.792 "data_offset": 2048, 00:10:15.792 "data_size": 63488 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "name": "BaseBdev3", 00:10:15.792 "uuid": "cc61a78c-b28f-4dbe-873a-ccc7526eb0e7", 00:10:15.792 "is_configured": true, 00:10:15.792 "data_offset": 2048, 00:10:15.792 "data_size": 63488 00:10:15.792 }, 00:10:15.792 { 00:10:15.792 "name": "BaseBdev4", 00:10:15.792 "uuid": "232e80d4-c3ef-4fe7-a94b-5289f9dc25ac", 00:10:15.792 "is_configured": true, 00:10:15.792 "data_offset": 
2048, 00:10:15.792 "data_size": 63488 00:10:15.792 } 00:10:15.792 ] 00:10:15.792 } 00:10:15.792 } 00:10:15.792 }' 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.792 BaseBdev2 00:10:15.792 BaseBdev3 00:10:15.792 BaseBdev4' 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.792 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.051 18:50:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.051 18:50:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 [2024-11-28 18:50:45.574095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.051 [2024-11-28 18:50:45.574120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.051 [2024-11-28 18:50:45.574188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:16.051 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.052 "name": "Existed_Raid", 00:10:16.052 "uuid": "188c4af3-35a0-41a3-a63e-2425f22139f0", 
00:10:16.052 "strip_size_kb": 64, 00:10:16.052 "state": "offline", 00:10:16.052 "raid_level": "raid0", 00:10:16.052 "superblock": true, 00:10:16.052 "num_base_bdevs": 4, 00:10:16.052 "num_base_bdevs_discovered": 3, 00:10:16.052 "num_base_bdevs_operational": 3, 00:10:16.052 "base_bdevs_list": [ 00:10:16.052 { 00:10:16.052 "name": null, 00:10:16.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.052 "is_configured": false, 00:10:16.052 "data_offset": 0, 00:10:16.052 "data_size": 63488 00:10:16.052 }, 00:10:16.052 { 00:10:16.052 "name": "BaseBdev2", 00:10:16.052 "uuid": "63773329-7b56-4071-bc6f-88aa435e59a9", 00:10:16.052 "is_configured": true, 00:10:16.052 "data_offset": 2048, 00:10:16.052 "data_size": 63488 00:10:16.052 }, 00:10:16.052 { 00:10:16.052 "name": "BaseBdev3", 00:10:16.052 "uuid": "cc61a78c-b28f-4dbe-873a-ccc7526eb0e7", 00:10:16.052 "is_configured": true, 00:10:16.052 "data_offset": 2048, 00:10:16.052 "data_size": 63488 00:10:16.052 }, 00:10:16.052 { 00:10:16.052 "name": "BaseBdev4", 00:10:16.052 "uuid": "232e80d4-c3ef-4fe7-a94b-5289f9dc25ac", 00:10:16.052 "is_configured": true, 00:10:16.052 "data_offset": 2048, 00:10:16.052 "data_size": 63488 00:10:16.052 } 00:10:16.052 ] 00:10:16.052 }' 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.052 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.622 [2024-11-28 18:50:46.073387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # 
'[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.622 [2024-11-28 18:50:46.148657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.622 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.622 [2024-11-28 
18:50:46.219611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:16.622 [2024-11-28 18:50:46.219705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.884 18:50:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.885 BaseBdev2 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.885 [ 00:10:16.885 { 00:10:16.885 "name": "BaseBdev2", 00:10:16.885 "aliases": [ 00:10:16.885 "4198c866-f3c5-4859-8dd8-048c217e7ca6" 00:10:16.885 ], 00:10:16.885 "product_name": "Malloc disk", 00:10:16.885 "block_size": 512, 00:10:16.885 "num_blocks": 65536, 00:10:16.885 "uuid": 
"4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:16.885 "assigned_rate_limits": { 00:10:16.885 "rw_ios_per_sec": 0, 00:10:16.885 "rw_mbytes_per_sec": 0, 00:10:16.885 "r_mbytes_per_sec": 0, 00:10:16.885 "w_mbytes_per_sec": 0 00:10:16.885 }, 00:10:16.885 "claimed": false, 00:10:16.885 "zoned": false, 00:10:16.885 "supported_io_types": { 00:10:16.885 "read": true, 00:10:16.885 "write": true, 00:10:16.885 "unmap": true, 00:10:16.885 "flush": true, 00:10:16.885 "reset": true, 00:10:16.885 "nvme_admin": false, 00:10:16.885 "nvme_io": false, 00:10:16.885 "nvme_io_md": false, 00:10:16.885 "write_zeroes": true, 00:10:16.885 "zcopy": true, 00:10:16.885 "get_zone_info": false, 00:10:16.885 "zone_management": false, 00:10:16.885 "zone_append": false, 00:10:16.885 "compare": false, 00:10:16.885 "compare_and_write": false, 00:10:16.885 "abort": true, 00:10:16.885 "seek_hole": false, 00:10:16.885 "seek_data": false, 00:10:16.885 "copy": true, 00:10:16.885 "nvme_iov_md": false 00:10:16.885 }, 00:10:16.885 "memory_domains": [ 00:10:16.885 { 00:10:16.885 "dma_device_id": "system", 00:10:16.885 "dma_device_type": 1 00:10:16.885 }, 00:10:16.885 { 00:10:16.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.885 "dma_device_type": 2 00:10:16.885 } 00:10:16.885 ], 00:10:16.885 "driver_specific": {} 00:10:16.885 } 00:10:16.885 ] 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.885 BaseBdev3 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.885 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.885 [ 00:10:16.885 { 00:10:16.885 "name": "BaseBdev3", 00:10:16.885 "aliases": [ 00:10:16.885 "9a433987-9647-4258-b236-c717b76d3e04" 00:10:16.885 ], 00:10:16.885 "product_name": "Malloc disk", 00:10:16.885 "block_size": 512, 
00:10:16.885 "num_blocks": 65536, 00:10:16.885 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:16.885 "assigned_rate_limits": { 00:10:16.885 "rw_ios_per_sec": 0, 00:10:16.885 "rw_mbytes_per_sec": 0, 00:10:16.885 "r_mbytes_per_sec": 0, 00:10:16.885 "w_mbytes_per_sec": 0 00:10:16.885 }, 00:10:16.885 "claimed": false, 00:10:16.885 "zoned": false, 00:10:16.885 "supported_io_types": { 00:10:16.885 "read": true, 00:10:16.885 "write": true, 00:10:16.885 "unmap": true, 00:10:16.885 "flush": true, 00:10:16.885 "reset": true, 00:10:16.885 "nvme_admin": false, 00:10:16.885 "nvme_io": false, 00:10:16.885 "nvme_io_md": false, 00:10:16.885 "write_zeroes": true, 00:10:16.885 "zcopy": true, 00:10:16.885 "get_zone_info": false, 00:10:16.885 "zone_management": false, 00:10:16.885 "zone_append": false, 00:10:16.885 "compare": false, 00:10:16.885 "compare_and_write": false, 00:10:16.885 "abort": true, 00:10:16.885 "seek_hole": false, 00:10:16.885 "seek_data": false, 00:10:16.885 "copy": true, 00:10:16.885 "nvme_iov_md": false 00:10:16.886 }, 00:10:16.886 "memory_domains": [ 00:10:16.886 { 00:10:16.886 "dma_device_id": "system", 00:10:16.886 "dma_device_type": 1 00:10:16.886 }, 00:10:16.886 { 00:10:16.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.886 "dma_device_type": 2 00:10:16.886 } 00:10:16.886 ], 00:10:16.886 "driver_specific": {} 00:10:16.886 } 00:10:16.886 ] 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:16.886 18:50:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.886 BaseBdev4 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.886 [ 00:10:16.886 { 00:10:16.886 "name": "BaseBdev4", 00:10:16.886 "aliases": [ 00:10:16.886 "c88c02ae-6764-475b-a763-746263e3c70b" 00:10:16.886 ], 
00:10:16.886 "product_name": "Malloc disk", 00:10:16.886 "block_size": 512, 00:10:16.886 "num_blocks": 65536, 00:10:16.886 "uuid": "c88c02ae-6764-475b-a763-746263e3c70b", 00:10:16.886 "assigned_rate_limits": { 00:10:16.886 "rw_ios_per_sec": 0, 00:10:16.886 "rw_mbytes_per_sec": 0, 00:10:16.886 "r_mbytes_per_sec": 0, 00:10:16.886 "w_mbytes_per_sec": 0 00:10:16.886 }, 00:10:16.886 "claimed": false, 00:10:16.886 "zoned": false, 00:10:16.886 "supported_io_types": { 00:10:16.886 "read": true, 00:10:16.886 "write": true, 00:10:16.886 "unmap": true, 00:10:16.886 "flush": true, 00:10:16.886 "reset": true, 00:10:16.886 "nvme_admin": false, 00:10:16.886 "nvme_io": false, 00:10:16.886 "nvme_io_md": false, 00:10:16.886 "write_zeroes": true, 00:10:16.886 "zcopy": true, 00:10:16.886 "get_zone_info": false, 00:10:16.886 "zone_management": false, 00:10:16.886 "zone_append": false, 00:10:16.886 "compare": false, 00:10:16.886 "compare_and_write": false, 00:10:16.886 "abort": true, 00:10:16.886 "seek_hole": false, 00:10:16.886 "seek_data": false, 00:10:16.886 "copy": true, 00:10:16.886 "nvme_iov_md": false 00:10:16.886 }, 00:10:16.886 "memory_domains": [ 00:10:16.886 { 00:10:16.886 "dma_device_id": "system", 00:10:16.886 "dma_device_type": 1 00:10:16.886 }, 00:10:16.886 { 00:10:16.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.886 "dma_device_type": 2 00:10:16.886 } 00:10:16.886 ], 00:10:16.886 "driver_specific": {} 00:10:16.886 } 00:10:16.886 ] 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.886 [2024-11-28 18:50:46.451423] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.886 [2024-11-28 18:50:46.451476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.886 [2024-11-28 18:50:46.451496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.886 [2024-11-28 18:50:46.453268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.886 [2024-11-28 18:50:46.453315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.886 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.887 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.887 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.887 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.154 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.154 "name": "Existed_Raid", 00:10:17.154 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:17.154 "strip_size_kb": 64, 00:10:17.154 "state": "configuring", 00:10:17.154 "raid_level": "raid0", 00:10:17.154 "superblock": true, 00:10:17.154 "num_base_bdevs": 4, 00:10:17.154 "num_base_bdevs_discovered": 3, 00:10:17.154 "num_base_bdevs_operational": 4, 00:10:17.154 "base_bdevs_list": [ 00:10:17.154 { 00:10:17.154 "name": "BaseBdev1", 00:10:17.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.154 "is_configured": false, 00:10:17.154 "data_offset": 0, 00:10:17.154 "data_size": 0 00:10:17.154 }, 00:10:17.154 { 00:10:17.154 "name": "BaseBdev2", 00:10:17.154 "uuid": "4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:17.154 "is_configured": true, 00:10:17.155 "data_offset": 2048, 00:10:17.155 "data_size": 63488 00:10:17.155 }, 00:10:17.155 { 00:10:17.155 "name": "BaseBdev3", 00:10:17.155 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:17.155 "is_configured": true, 00:10:17.155 "data_offset": 2048, 
00:10:17.155 "data_size": 63488 00:10:17.155 }, 00:10:17.155 { 00:10:17.155 "name": "BaseBdev4", 00:10:17.155 "uuid": "c88c02ae-6764-475b-a763-746263e3c70b", 00:10:17.155 "is_configured": true, 00:10:17.155 "data_offset": 2048, 00:10:17.155 "data_size": 63488 00:10:17.155 } 00:10:17.155 ] 00:10:17.155 }' 00:10:17.155 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.155 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.419 [2024-11-28 18:50:46.871547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.419 "name": "Existed_Raid", 00:10:17.419 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:17.419 "strip_size_kb": 64, 00:10:17.419 "state": "configuring", 00:10:17.419 "raid_level": "raid0", 00:10:17.419 "superblock": true, 00:10:17.419 "num_base_bdevs": 4, 00:10:17.419 "num_base_bdevs_discovered": 2, 00:10:17.419 "num_base_bdevs_operational": 4, 00:10:17.419 "base_bdevs_list": [ 00:10:17.419 { 00:10:17.419 "name": "BaseBdev1", 00:10:17.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.419 "is_configured": false, 00:10:17.419 "data_offset": 0, 00:10:17.419 "data_size": 0 00:10:17.419 }, 00:10:17.419 { 00:10:17.419 "name": null, 00:10:17.419 "uuid": "4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:17.419 "is_configured": false, 00:10:17.419 "data_offset": 0, 00:10:17.419 "data_size": 63488 00:10:17.419 }, 00:10:17.419 { 00:10:17.419 "name": "BaseBdev3", 00:10:17.419 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:17.419 "is_configured": true, 00:10:17.419 "data_offset": 2048, 00:10:17.419 
"data_size": 63488 00:10:17.419 }, 00:10:17.419 { 00:10:17.419 "name": "BaseBdev4", 00:10:17.419 "uuid": "c88c02ae-6764-475b-a763-746263e3c70b", 00:10:17.419 "is_configured": true, 00:10:17.419 "data_offset": 2048, 00:10:17.419 "data_size": 63488 00:10:17.419 } 00:10:17.419 ] 00:10:17.419 }' 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.419 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.988 [2024-11-28 18:50:47.338614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.988 BaseBdev1 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.988 [ 00:10:17.988 { 00:10:17.988 "name": "BaseBdev1", 00:10:17.988 "aliases": [ 00:10:17.988 "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6" 00:10:17.988 ], 00:10:17.988 "product_name": "Malloc disk", 00:10:17.988 "block_size": 512, 00:10:17.988 "num_blocks": 65536, 00:10:17.988 "uuid": "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6", 00:10:17.988 "assigned_rate_limits": { 00:10:17.988 "rw_ios_per_sec": 0, 00:10:17.988 "rw_mbytes_per_sec": 0, 00:10:17.988 "r_mbytes_per_sec": 0, 00:10:17.988 "w_mbytes_per_sec": 0 00:10:17.988 }, 00:10:17.988 "claimed": true, 00:10:17.988 "claim_type": "exclusive_write", 00:10:17.988 "zoned": false, 00:10:17.988 "supported_io_types": { 
00:10:17.988 "read": true, 00:10:17.988 "write": true, 00:10:17.988 "unmap": true, 00:10:17.988 "flush": true, 00:10:17.988 "reset": true, 00:10:17.988 "nvme_admin": false, 00:10:17.988 "nvme_io": false, 00:10:17.988 "nvme_io_md": false, 00:10:17.988 "write_zeroes": true, 00:10:17.988 "zcopy": true, 00:10:17.988 "get_zone_info": false, 00:10:17.988 "zone_management": false, 00:10:17.988 "zone_append": false, 00:10:17.988 "compare": false, 00:10:17.988 "compare_and_write": false, 00:10:17.988 "abort": true, 00:10:17.988 "seek_hole": false, 00:10:17.988 "seek_data": false, 00:10:17.988 "copy": true, 00:10:17.988 "nvme_iov_md": false 00:10:17.988 }, 00:10:17.988 "memory_domains": [ 00:10:17.988 { 00:10:17.988 "dma_device_id": "system", 00:10:17.988 "dma_device_type": 1 00:10:17.988 }, 00:10:17.988 { 00:10:17.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.988 "dma_device_type": 2 00:10:17.988 } 00:10:17.988 ], 00:10:17.988 "driver_specific": {} 00:10:17.988 } 00:10:17.988 ] 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.988 18:50:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.988 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.989 "name": "Existed_Raid", 00:10:17.989 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:17.989 "strip_size_kb": 64, 00:10:17.989 "state": "configuring", 00:10:17.989 "raid_level": "raid0", 00:10:17.989 "superblock": true, 00:10:17.989 "num_base_bdevs": 4, 00:10:17.989 "num_base_bdevs_discovered": 3, 00:10:17.989 "num_base_bdevs_operational": 4, 00:10:17.989 "base_bdevs_list": [ 00:10:17.989 { 00:10:17.989 "name": "BaseBdev1", 00:10:17.989 "uuid": "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6", 00:10:17.989 "is_configured": true, 00:10:17.989 "data_offset": 2048, 00:10:17.989 "data_size": 63488 00:10:17.989 }, 00:10:17.989 { 00:10:17.989 "name": null, 00:10:17.989 "uuid": "4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:17.989 "is_configured": false, 00:10:17.989 "data_offset": 0, 00:10:17.989 "data_size": 63488 00:10:17.989 }, 00:10:17.989 { 00:10:17.989 "name": 
"BaseBdev3", 00:10:17.989 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:17.989 "is_configured": true, 00:10:17.989 "data_offset": 2048, 00:10:17.989 "data_size": 63488 00:10:17.989 }, 00:10:17.989 { 00:10:17.989 "name": "BaseBdev4", 00:10:17.989 "uuid": "c88c02ae-6764-475b-a763-746263e3c70b", 00:10:17.989 "is_configured": true, 00:10:17.989 "data_offset": 2048, 00:10:17.989 "data_size": 63488 00:10:17.989 } 00:10:17.989 ] 00:10:17.989 }' 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.989 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.248 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.248 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.248 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.248 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.508 [2024-11-28 18:50:47.890803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.508 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.508 "name": "Existed_Raid", 00:10:18.508 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:18.508 "strip_size_kb": 64, 00:10:18.508 "state": "configuring", 
00:10:18.508 "raid_level": "raid0", 00:10:18.508 "superblock": true, 00:10:18.508 "num_base_bdevs": 4, 00:10:18.509 "num_base_bdevs_discovered": 2, 00:10:18.509 "num_base_bdevs_operational": 4, 00:10:18.509 "base_bdevs_list": [ 00:10:18.509 { 00:10:18.509 "name": "BaseBdev1", 00:10:18.509 "uuid": "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6", 00:10:18.509 "is_configured": true, 00:10:18.509 "data_offset": 2048, 00:10:18.509 "data_size": 63488 00:10:18.509 }, 00:10:18.509 { 00:10:18.509 "name": null, 00:10:18.509 "uuid": "4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:18.509 "is_configured": false, 00:10:18.509 "data_offset": 0, 00:10:18.509 "data_size": 63488 00:10:18.509 }, 00:10:18.509 { 00:10:18.509 "name": null, 00:10:18.509 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:18.509 "is_configured": false, 00:10:18.509 "data_offset": 0, 00:10:18.509 "data_size": 63488 00:10:18.509 }, 00:10:18.509 { 00:10:18.509 "name": "BaseBdev4", 00:10:18.509 "uuid": "c88c02ae-6764-475b-a763-746263e3c70b", 00:10:18.509 "is_configured": true, 00:10:18.509 "data_offset": 2048, 00:10:18.509 "data_size": 63488 00:10:18.509 } 00:10:18.509 ] 00:10:18.509 }' 00:10:18.509 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.509 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.769 18:50:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.769 [2024-11-28 18:50:48.354980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.769 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.028 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.028 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.028 "name": "Existed_Raid", 00:10:19.028 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:19.028 "strip_size_kb": 64, 00:10:19.028 "state": "configuring", 00:10:19.028 "raid_level": "raid0", 00:10:19.028 "superblock": true, 00:10:19.028 "num_base_bdevs": 4, 00:10:19.028 "num_base_bdevs_discovered": 3, 00:10:19.028 "num_base_bdevs_operational": 4, 00:10:19.028 "base_bdevs_list": [ 00:10:19.028 { 00:10:19.028 "name": "BaseBdev1", 00:10:19.028 "uuid": "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6", 00:10:19.028 "is_configured": true, 00:10:19.029 "data_offset": 2048, 00:10:19.029 "data_size": 63488 00:10:19.029 }, 00:10:19.029 { 00:10:19.029 "name": null, 00:10:19.029 "uuid": "4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:19.029 "is_configured": false, 00:10:19.029 "data_offset": 0, 00:10:19.029 "data_size": 63488 00:10:19.029 }, 00:10:19.029 { 00:10:19.029 "name": "BaseBdev3", 00:10:19.029 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:19.029 "is_configured": true, 00:10:19.029 "data_offset": 2048, 00:10:19.029 "data_size": 63488 00:10:19.029 }, 00:10:19.029 { 00:10:19.029 "name": "BaseBdev4", 00:10:19.029 "uuid": "c88c02ae-6764-475b-a763-746263e3c70b", 00:10:19.029 "is_configured": true, 00:10:19.029 "data_offset": 2048, 00:10:19.029 "data_size": 63488 00:10:19.029 } 00:10:19.029 ] 00:10:19.029 }' 00:10:19.029 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:19.029 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.289 [2024-11-28 18:50:48.823113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.289 "name": "Existed_Raid", 00:10:19.289 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:19.289 "strip_size_kb": 64, 00:10:19.289 "state": "configuring", 00:10:19.289 "raid_level": "raid0", 00:10:19.289 "superblock": true, 00:10:19.289 "num_base_bdevs": 4, 00:10:19.289 "num_base_bdevs_discovered": 2, 00:10:19.289 "num_base_bdevs_operational": 4, 00:10:19.289 "base_bdevs_list": [ 00:10:19.289 { 00:10:19.289 "name": null, 00:10:19.289 "uuid": "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6", 00:10:19.289 "is_configured": false, 00:10:19.289 "data_offset": 0, 00:10:19.289 "data_size": 63488 00:10:19.289 }, 00:10:19.289 { 00:10:19.289 "name": null, 00:10:19.289 "uuid": "4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:19.289 
"is_configured": false, 00:10:19.289 "data_offset": 0, 00:10:19.289 "data_size": 63488 00:10:19.289 }, 00:10:19.289 { 00:10:19.289 "name": "BaseBdev3", 00:10:19.289 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:19.289 "is_configured": true, 00:10:19.289 "data_offset": 2048, 00:10:19.289 "data_size": 63488 00:10:19.289 }, 00:10:19.289 { 00:10:19.289 "name": "BaseBdev4", 00:10:19.289 "uuid": "c88c02ae-6764-475b-a763-746263e3c70b", 00:10:19.289 "is_configured": true, 00:10:19.289 "data_offset": 2048, 00:10:19.289 "data_size": 63488 00:10:19.289 } 00:10:19.289 ] 00:10:19.289 }' 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.289 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.858 [2024-11-28 18:50:49.321794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.858 
18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.858 
"name": "Existed_Raid", 00:10:19.858 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:19.858 "strip_size_kb": 64, 00:10:19.858 "state": "configuring", 00:10:19.858 "raid_level": "raid0", 00:10:19.858 "superblock": true, 00:10:19.858 "num_base_bdevs": 4, 00:10:19.858 "num_base_bdevs_discovered": 3, 00:10:19.858 "num_base_bdevs_operational": 4, 00:10:19.858 "base_bdevs_list": [ 00:10:19.858 { 00:10:19.858 "name": null, 00:10:19.858 "uuid": "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6", 00:10:19.858 "is_configured": false, 00:10:19.858 "data_offset": 0, 00:10:19.858 "data_size": 63488 00:10:19.858 }, 00:10:19.858 { 00:10:19.858 "name": "BaseBdev2", 00:10:19.858 "uuid": "4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:19.858 "is_configured": true, 00:10:19.858 "data_offset": 2048, 00:10:19.858 "data_size": 63488 00:10:19.858 }, 00:10:19.858 { 00:10:19.858 "name": "BaseBdev3", 00:10:19.858 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:19.858 "is_configured": true, 00:10:19.858 "data_offset": 2048, 00:10:19.858 "data_size": 63488 00:10:19.858 }, 00:10:19.858 { 00:10:19.858 "name": "BaseBdev4", 00:10:19.858 "uuid": "c88c02ae-6764-475b-a763-746263e3c70b", 00:10:19.858 "is_configured": true, 00:10:19.858 "data_offset": 2048, 00:10:19.858 "data_size": 63488 00:10:19.858 } 00:10:19.858 ] 00:10:19.858 }' 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.858 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.117 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.117 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.117 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.117 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.377 [2024-11-28 18:50:49.816844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:20.377 [2024-11-28 18:50:49.817032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.377 [2024-11-28 18:50:49.817049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.377 [2024-11-28 18:50:49.817287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:20.377 NewBaseBdev 00:10:20.377 [2024-11-28 18:50:49.817400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.377 [2024-11-28 18:50:49.817408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:20.377 [2024-11-28 18:50:49.817520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.377 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.377 [ 00:10:20.377 { 00:10:20.377 "name": "NewBaseBdev", 00:10:20.377 "aliases": [ 00:10:20.377 "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6" 00:10:20.377 ], 00:10:20.377 "product_name": "Malloc disk", 00:10:20.377 "block_size": 512, 
00:10:20.377 "num_blocks": 65536, 00:10:20.377 "uuid": "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6", 00:10:20.377 "assigned_rate_limits": { 00:10:20.377 "rw_ios_per_sec": 0, 00:10:20.377 "rw_mbytes_per_sec": 0, 00:10:20.377 "r_mbytes_per_sec": 0, 00:10:20.377 "w_mbytes_per_sec": 0 00:10:20.377 }, 00:10:20.377 "claimed": true, 00:10:20.377 "claim_type": "exclusive_write", 00:10:20.377 "zoned": false, 00:10:20.377 "supported_io_types": { 00:10:20.377 "read": true, 00:10:20.377 "write": true, 00:10:20.377 "unmap": true, 00:10:20.377 "flush": true, 00:10:20.377 "reset": true, 00:10:20.377 "nvme_admin": false, 00:10:20.377 "nvme_io": false, 00:10:20.377 "nvme_io_md": false, 00:10:20.377 "write_zeroes": true, 00:10:20.377 "zcopy": true, 00:10:20.377 "get_zone_info": false, 00:10:20.377 "zone_management": false, 00:10:20.378 "zone_append": false, 00:10:20.378 "compare": false, 00:10:20.378 "compare_and_write": false, 00:10:20.378 "abort": true, 00:10:20.378 "seek_hole": false, 00:10:20.378 "seek_data": false, 00:10:20.378 "copy": true, 00:10:20.378 "nvme_iov_md": false 00:10:20.378 }, 00:10:20.378 "memory_domains": [ 00:10:20.378 { 00:10:20.378 "dma_device_id": "system", 00:10:20.378 "dma_device_type": 1 00:10:20.378 }, 00:10:20.378 { 00:10:20.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.378 "dma_device_type": 2 00:10:20.378 } 00:10:20.378 ], 00:10:20.378 "driver_specific": {} 00:10:20.378 } 00:10:20.378 ] 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.378 "name": "Existed_Raid", 00:10:20.378 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:20.378 "strip_size_kb": 64, 00:10:20.378 "state": "online", 00:10:20.378 "raid_level": "raid0", 00:10:20.378 "superblock": true, 00:10:20.378 "num_base_bdevs": 4, 00:10:20.378 "num_base_bdevs_discovered": 4, 00:10:20.378 "num_base_bdevs_operational": 4, 00:10:20.378 "base_bdevs_list": [ 00:10:20.378 { 00:10:20.378 "name": "NewBaseBdev", 00:10:20.378 "uuid": 
"9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6", 00:10:20.378 "is_configured": true, 00:10:20.378 "data_offset": 2048, 00:10:20.378 "data_size": 63488 00:10:20.378 }, 00:10:20.378 { 00:10:20.378 "name": "BaseBdev2", 00:10:20.378 "uuid": "4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:20.378 "is_configured": true, 00:10:20.378 "data_offset": 2048, 00:10:20.378 "data_size": 63488 00:10:20.378 }, 00:10:20.378 { 00:10:20.378 "name": "BaseBdev3", 00:10:20.378 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:20.378 "is_configured": true, 00:10:20.378 "data_offset": 2048, 00:10:20.378 "data_size": 63488 00:10:20.378 }, 00:10:20.378 { 00:10:20.378 "name": "BaseBdev4", 00:10:20.378 "uuid": "c88c02ae-6764-475b-a763-746263e3c70b", 00:10:20.378 "is_configured": true, 00:10:20.378 "data_offset": 2048, 00:10:20.378 "data_size": 63488 00:10:20.378 } 00:10:20.378 ] 00:10:20.378 }' 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.378 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.945 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.946 [2024-11-28 18:50:50.285285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.946 "name": "Existed_Raid", 00:10:20.946 "aliases": [ 00:10:20.946 "78dc1b69-04ad-4fae-9214-f20aa4724ca5" 00:10:20.946 ], 00:10:20.946 "product_name": "Raid Volume", 00:10:20.946 "block_size": 512, 00:10:20.946 "num_blocks": 253952, 00:10:20.946 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:20.946 "assigned_rate_limits": { 00:10:20.946 "rw_ios_per_sec": 0, 00:10:20.946 "rw_mbytes_per_sec": 0, 00:10:20.946 "r_mbytes_per_sec": 0, 00:10:20.946 "w_mbytes_per_sec": 0 00:10:20.946 }, 00:10:20.946 "claimed": false, 00:10:20.946 "zoned": false, 00:10:20.946 "supported_io_types": { 00:10:20.946 "read": true, 00:10:20.946 "write": true, 00:10:20.946 "unmap": true, 00:10:20.946 "flush": true, 00:10:20.946 "reset": true, 00:10:20.946 "nvme_admin": false, 00:10:20.946 "nvme_io": false, 00:10:20.946 "nvme_io_md": false, 00:10:20.946 "write_zeroes": true, 00:10:20.946 "zcopy": false, 00:10:20.946 "get_zone_info": false, 00:10:20.946 "zone_management": false, 00:10:20.946 "zone_append": false, 00:10:20.946 "compare": false, 00:10:20.946 "compare_and_write": false, 00:10:20.946 "abort": false, 00:10:20.946 "seek_hole": false, 00:10:20.946 "seek_data": false, 00:10:20.946 "copy": false, 00:10:20.946 "nvme_iov_md": false 00:10:20.946 }, 00:10:20.946 "memory_domains": [ 00:10:20.946 { 00:10:20.946 "dma_device_id": "system", 00:10:20.946 "dma_device_type": 1 00:10:20.946 }, 00:10:20.946 
{ 00:10:20.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.946 "dma_device_type": 2 00:10:20.946 }, 00:10:20.946 { 00:10:20.946 "dma_device_id": "system", 00:10:20.946 "dma_device_type": 1 00:10:20.946 }, 00:10:20.946 { 00:10:20.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.946 "dma_device_type": 2 00:10:20.946 }, 00:10:20.946 { 00:10:20.946 "dma_device_id": "system", 00:10:20.946 "dma_device_type": 1 00:10:20.946 }, 00:10:20.946 { 00:10:20.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.946 "dma_device_type": 2 00:10:20.946 }, 00:10:20.946 { 00:10:20.946 "dma_device_id": "system", 00:10:20.946 "dma_device_type": 1 00:10:20.946 }, 00:10:20.946 { 00:10:20.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.946 "dma_device_type": 2 00:10:20.946 } 00:10:20.946 ], 00:10:20.946 "driver_specific": { 00:10:20.946 "raid": { 00:10:20.946 "uuid": "78dc1b69-04ad-4fae-9214-f20aa4724ca5", 00:10:20.946 "strip_size_kb": 64, 00:10:20.946 "state": "online", 00:10:20.946 "raid_level": "raid0", 00:10:20.946 "superblock": true, 00:10:20.946 "num_base_bdevs": 4, 00:10:20.946 "num_base_bdevs_discovered": 4, 00:10:20.946 "num_base_bdevs_operational": 4, 00:10:20.946 "base_bdevs_list": [ 00:10:20.946 { 00:10:20.946 "name": "NewBaseBdev", 00:10:20.946 "uuid": "9f1ce7a6-325d-4ba4-85c8-ffa0e40d95f6", 00:10:20.946 "is_configured": true, 00:10:20.946 "data_offset": 2048, 00:10:20.946 "data_size": 63488 00:10:20.946 }, 00:10:20.946 { 00:10:20.946 "name": "BaseBdev2", 00:10:20.946 "uuid": "4198c866-f3c5-4859-8dd8-048c217e7ca6", 00:10:20.946 "is_configured": true, 00:10:20.946 "data_offset": 2048, 00:10:20.946 "data_size": 63488 00:10:20.946 }, 00:10:20.946 { 00:10:20.946 "name": "BaseBdev3", 00:10:20.946 "uuid": "9a433987-9647-4258-b236-c717b76d3e04", 00:10:20.946 "is_configured": true, 00:10:20.946 "data_offset": 2048, 00:10:20.946 "data_size": 63488 00:10:20.946 }, 00:10:20.946 { 00:10:20.946 "name": "BaseBdev4", 00:10:20.946 "uuid": 
"c88c02ae-6764-475b-a763-746263e3c70b", 00:10:20.946 "is_configured": true, 00:10:20.946 "data_offset": 2048, 00:10:20.946 "data_size": 63488 00:10:20.946 } 00:10:20.946 ] 00:10:20.946 } 00:10:20.946 } 00:10:20.946 }' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:20.946 BaseBdev2 00:10:20.946 BaseBdev3 00:10:20.946 BaseBdev4' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.946 18:50:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.946 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.206 [2024-11-28 18:50:50.609084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.206 [2024-11-28 18:50:50.609110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.206 [2024-11-28 18:50:50.609183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.206 [2024-11-28 18:50:50.609246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.206 [2024-11-28 18:50:50.609262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82521 
00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82521 ']' 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82521 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:21.206 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.207 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82521 00:10:21.207 killing process with pid 82521 00:10:21.207 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.207 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.207 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82521' 00:10:21.207 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82521 00:10:21.207 [2024-11-28 18:50:50.654807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.207 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82521 00:10:21.207 [2024-11-28 18:50:50.694245] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.467 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.467 00:10:21.467 real 0m9.362s 00:10:21.467 user 0m16.078s 00:10:21.467 sys 0m1.871s 00:10:21.467 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.467 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.467 ************************************ 00:10:21.467 END TEST raid_state_function_test_sb 00:10:21.467 ************************************ 
00:10:21.467 18:50:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:21.467 18:50:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:21.467 18:50:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.467 18:50:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.467 ************************************ 00:10:21.467 START TEST raid_superblock_test 00:10:21.467 ************************************ 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local 
raid_bdev 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83169 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83169 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83169 ']' 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.467 18:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.727 [2024-11-28 18:50:51.079122] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:10:21.727 [2024-11-28 18:50:51.079273] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83169 ] 00:10:21.727 [2024-11-28 18:50:51.213783] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:21.727 [2024-11-28 18:50:51.254230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.727 [2024-11-28 18:50:51.278931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.727 [2024-11-28 18:50:51.320499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.727 [2024-11-28 18:50:51.320627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.297 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.558 malloc1 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.558 [2024-11-28 18:50:51.920902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.558 [2024-11-28 18:50:51.921011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.558 [2024-11-28 18:50:51.921056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:22.558 [2024-11-28 18:50:51.921100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.558 [2024-11-28 18:50:51.923175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.558 [2024-11-28 18:50:51.923244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.558 pt1 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.558 malloc2 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.558 [2024-11-28 18:50:51.953390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.558 [2024-11-28 18:50:51.953502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.558 [2024-11-28 18:50:51.953537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:22.558 [2024-11-28 18:50:51.953565] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.558 [2024-11-28 18:50:51.955570] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.558 [2024-11-28 18:50:51.955650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.558 pt2 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.558 malloc3 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.558 18:50:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.558 [2024-11-28 18:50:51.981716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.558 [2024-11-28 18:50:51.981813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.558 [2024-11-28 18:50:51.981848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:22.558 [2024-11-28 18:50:51.981875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.558 [2024-11-28 18:50:51.983897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.558 [2024-11-28 18:50:51.983965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.558 pt3 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:22.558 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:22.559 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:22.559 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.559 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.559 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.559 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:22.559 18:50:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.559 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.559 malloc4 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.559 [2024-11-28 18:50:52.034232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:22.559 [2024-11-28 18:50:52.034453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.559 [2024-11-28 18:50:52.034555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:22.559 [2024-11-28 18:50:52.034634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.559 [2024-11-28 18:50:52.038278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.559 [2024-11-28 18:50:52.038379] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:22.559 pt4 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.559 18:50:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.559 [2024-11-28 18:50:52.046653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.559 [2024-11-28 18:50:52.048684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.559 [2024-11-28 18:50:52.048843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.559 [2024-11-28 18:50:52.048900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:22.559 [2024-11-28 18:50:52.049066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:22.559 [2024-11-28 18:50:52.049084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:22.559 [2024-11-28 18:50:52.049365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:22.559 [2024-11-28 18:50:52.049524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:22.559 [2024-11-28 18:50:52.049542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:22.559 [2024-11-28 18:50:52.049657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.559 "name": "raid_bdev1", 00:10:22.559 "uuid": "dd9e4567-2383-4feb-ac1d-6d28e4acb721", 00:10:22.559 "strip_size_kb": 64, 00:10:22.559 "state": "online", 00:10:22.559 "raid_level": "raid0", 00:10:22.559 "superblock": true, 00:10:22.559 "num_base_bdevs": 4, 00:10:22.559 "num_base_bdevs_discovered": 4, 00:10:22.559 "num_base_bdevs_operational": 4, 00:10:22.559 "base_bdevs_list": [ 00:10:22.559 { 00:10:22.559 "name": "pt1", 00:10:22.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.559 "is_configured": true, 00:10:22.559 "data_offset": 2048, 00:10:22.559 "data_size": 63488 00:10:22.559 }, 00:10:22.559 { 00:10:22.559 "name": "pt2", 00:10:22.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.559 "is_configured": true, 00:10:22.559 "data_offset": 2048, 00:10:22.559 
"data_size": 63488 00:10:22.559 }, 00:10:22.559 { 00:10:22.559 "name": "pt3", 00:10:22.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.559 "is_configured": true, 00:10:22.559 "data_offset": 2048, 00:10:22.559 "data_size": 63488 00:10:22.559 }, 00:10:22.559 { 00:10:22.559 "name": "pt4", 00:10:22.559 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.559 "is_configured": true, 00:10:22.559 "data_offset": 2048, 00:10:22.559 "data_size": 63488 00:10:22.559 } 00:10:22.559 ] 00:10:22.559 }' 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.559 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.128 [2024-11-28 18:50:52.511019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:23.128 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.128 "name": "raid_bdev1", 00:10:23.128 "aliases": [ 00:10:23.128 "dd9e4567-2383-4feb-ac1d-6d28e4acb721" 00:10:23.128 ], 00:10:23.128 "product_name": "Raid Volume", 00:10:23.128 "block_size": 512, 00:10:23.128 "num_blocks": 253952, 00:10:23.128 "uuid": "dd9e4567-2383-4feb-ac1d-6d28e4acb721", 00:10:23.128 "assigned_rate_limits": { 00:10:23.128 "rw_ios_per_sec": 0, 00:10:23.128 "rw_mbytes_per_sec": 0, 00:10:23.128 "r_mbytes_per_sec": 0, 00:10:23.128 "w_mbytes_per_sec": 0 00:10:23.128 }, 00:10:23.128 "claimed": false, 00:10:23.128 "zoned": false, 00:10:23.128 "supported_io_types": { 00:10:23.128 "read": true, 00:10:23.128 "write": true, 00:10:23.128 "unmap": true, 00:10:23.128 "flush": true, 00:10:23.128 "reset": true, 00:10:23.128 "nvme_admin": false, 00:10:23.128 "nvme_io": false, 00:10:23.128 "nvme_io_md": false, 00:10:23.128 "write_zeroes": true, 00:10:23.128 "zcopy": false, 00:10:23.128 "get_zone_info": false, 00:10:23.128 "zone_management": false, 00:10:23.128 "zone_append": false, 00:10:23.128 "compare": false, 00:10:23.128 "compare_and_write": false, 00:10:23.128 "abort": false, 00:10:23.128 "seek_hole": false, 00:10:23.128 "seek_data": false, 00:10:23.128 "copy": false, 00:10:23.128 "nvme_iov_md": false 00:10:23.128 }, 00:10:23.128 "memory_domains": [ 00:10:23.128 { 00:10:23.128 "dma_device_id": "system", 00:10:23.128 "dma_device_type": 1 00:10:23.128 }, 00:10:23.128 { 00:10:23.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.128 "dma_device_type": 2 00:10:23.128 }, 00:10:23.128 { 00:10:23.128 "dma_device_id": "system", 00:10:23.128 "dma_device_type": 1 00:10:23.128 }, 00:10:23.128 { 00:10:23.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.128 "dma_device_type": 2 00:10:23.128 }, 00:10:23.128 { 00:10:23.128 "dma_device_id": "system", 00:10:23.128 "dma_device_type": 1 00:10:23.128 }, 00:10:23.128 { 00:10:23.128 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:23.128 "dma_device_type": 2 00:10:23.129 }, 00:10:23.129 { 00:10:23.129 "dma_device_id": "system", 00:10:23.129 "dma_device_type": 1 00:10:23.129 }, 00:10:23.129 { 00:10:23.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.129 "dma_device_type": 2 00:10:23.129 } 00:10:23.129 ], 00:10:23.129 "driver_specific": { 00:10:23.129 "raid": { 00:10:23.129 "uuid": "dd9e4567-2383-4feb-ac1d-6d28e4acb721", 00:10:23.129 "strip_size_kb": 64, 00:10:23.129 "state": "online", 00:10:23.129 "raid_level": "raid0", 00:10:23.129 "superblock": true, 00:10:23.129 "num_base_bdevs": 4, 00:10:23.129 "num_base_bdevs_discovered": 4, 00:10:23.129 "num_base_bdevs_operational": 4, 00:10:23.129 "base_bdevs_list": [ 00:10:23.129 { 00:10:23.129 "name": "pt1", 00:10:23.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.129 "is_configured": true, 00:10:23.129 "data_offset": 2048, 00:10:23.129 "data_size": 63488 00:10:23.129 }, 00:10:23.129 { 00:10:23.129 "name": "pt2", 00:10:23.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.129 "is_configured": true, 00:10:23.129 "data_offset": 2048, 00:10:23.129 "data_size": 63488 00:10:23.129 }, 00:10:23.129 { 00:10:23.129 "name": "pt3", 00:10:23.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.129 "is_configured": true, 00:10:23.129 "data_offset": 2048, 00:10:23.129 "data_size": 63488 00:10:23.129 }, 00:10:23.129 { 00:10:23.129 "name": "pt4", 00:10:23.129 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.129 "is_configured": true, 00:10:23.129 "data_offset": 2048, 00:10:23.129 "data_size": 63488 00:10:23.129 } 00:10:23.129 ] 00:10:23.129 } 00:10:23.129 } 00:10:23.129 }' 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:23.129 pt2 00:10:23.129 pt3 00:10:23.129 pt4' 
00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.129 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.389 18:50:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.389 [2024-11-28 18:50:52.827059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dd9e4567-2383-4feb-ac1d-6d28e4acb721 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dd9e4567-2383-4feb-ac1d-6d28e4acb721 ']' 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.389 [2024-11-28 18:50:52.858780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.389 [2024-11-28 18:50:52.858803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.389 [2024-11-28 18:50:52.858879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.389 [2024-11-28 18:50:52.858954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.389 [2024-11-28 18:50:52.858967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:23.389 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.390 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.390 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.390 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:23.390 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:23.390 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.390 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.390 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.651 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.651 [2024-11-28 18:50:53.002865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:23.651 [2024-11-28 18:50:53.004747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:23.651 [2024-11-28 18:50:53.004825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:23.651 [2024-11-28 18:50:53.004882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:23.651 [2024-11-28 18:50:53.004951] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:23.651 [2024-11-28 18:50:53.005034] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:23.651 [2024-11-28 18:50:53.005095] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:23.651 [2024-11-28 18:50:53.005151] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:23.651 [2024-11-28 
18:50:53.005204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.651 [2024-11-28 18:50:53.005235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:23.651 request: 00:10:23.651 { 00:10:23.651 "name": "raid_bdev1", 00:10:23.651 "raid_level": "raid0", 00:10:23.651 "base_bdevs": [ 00:10:23.651 "malloc1", 00:10:23.651 "malloc2", 00:10:23.651 "malloc3", 00:10:23.651 "malloc4" 00:10:23.651 ], 00:10:23.651 "strip_size_kb": 64, 00:10:23.651 "superblock": false, 00:10:23.651 "method": "bdev_raid_create", 00:10:23.651 "req_id": 1 00:10:23.651 } 00:10:23.651 Got JSON-RPC error response 00:10:23.651 response: 00:10:23.651 { 00:10:23.651 "code": -17, 00:10:23.651 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:23.651 } 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:23.651 18:50:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.651 [2024-11-28 18:50:53.070852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:23.651 [2024-11-28 18:50:53.070942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.651 [2024-11-28 18:50:53.070974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:23.651 [2024-11-28 18:50:53.071006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.651 [2024-11-28 18:50:53.073123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.651 [2024-11-28 18:50:53.073210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:23.651 [2024-11-28 18:50:53.073304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:23.651 [2024-11-28 18:50:53.073361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:23.651 pt1 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.651 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.652 "name": "raid_bdev1", 00:10:23.652 "uuid": "dd9e4567-2383-4feb-ac1d-6d28e4acb721", 00:10:23.652 "strip_size_kb": 64, 00:10:23.652 "state": "configuring", 00:10:23.652 "raid_level": "raid0", 00:10:23.652 "superblock": true, 00:10:23.652 "num_base_bdevs": 4, 00:10:23.652 "num_base_bdevs_discovered": 1, 00:10:23.652 "num_base_bdevs_operational": 4, 00:10:23.652 "base_bdevs_list": [ 00:10:23.652 { 00:10:23.652 "name": "pt1", 00:10:23.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.652 "is_configured": true, 00:10:23.652 "data_offset": 2048, 00:10:23.652 "data_size": 63488 00:10:23.652 }, 00:10:23.652 { 00:10:23.652 "name": null, 00:10:23.652 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:23.652 "is_configured": false, 00:10:23.652 "data_offset": 2048, 00:10:23.652 "data_size": 63488 00:10:23.652 }, 00:10:23.652 { 00:10:23.652 "name": null, 00:10:23.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.652 "is_configured": false, 00:10:23.652 "data_offset": 2048, 00:10:23.652 "data_size": 63488 00:10:23.652 }, 00:10:23.652 { 00:10:23.652 "name": null, 00:10:23.652 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.652 "is_configured": false, 00:10:23.652 "data_offset": 2048, 00:10:23.652 "data_size": 63488 00:10:23.652 } 00:10:23.652 ] 00:10:23.652 }' 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.652 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.912 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:23.912 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.912 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.912 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.172 [2024-11-28 18:50:53.518981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.172 [2024-11-28 18:50:53.519036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.172 [2024-11-28 18:50:53.519054] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:24.172 [2024-11-28 18:50:53.519064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.172 [2024-11-28 18:50:53.519468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.172 [2024-11-28 18:50:53.519499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:10:24.172 [2024-11-28 18:50:53.519568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.172 [2024-11-28 18:50:53.519591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.172 pt2 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.172 [2024-11-28 18:50:53.526980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.172 "name": "raid_bdev1", 00:10:24.172 "uuid": "dd9e4567-2383-4feb-ac1d-6d28e4acb721", 00:10:24.172 "strip_size_kb": 64, 00:10:24.172 "state": "configuring", 00:10:24.172 "raid_level": "raid0", 00:10:24.172 "superblock": true, 00:10:24.172 "num_base_bdevs": 4, 00:10:24.172 "num_base_bdevs_discovered": 1, 00:10:24.172 "num_base_bdevs_operational": 4, 00:10:24.172 "base_bdevs_list": [ 00:10:24.172 { 00:10:24.172 "name": "pt1", 00:10:24.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.172 "is_configured": true, 00:10:24.172 "data_offset": 2048, 00:10:24.172 "data_size": 63488 00:10:24.172 }, 00:10:24.172 { 00:10:24.172 "name": null, 00:10:24.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.172 "is_configured": false, 00:10:24.172 "data_offset": 0, 00:10:24.172 "data_size": 63488 00:10:24.172 }, 00:10:24.172 { 00:10:24.172 "name": null, 00:10:24.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.172 "is_configured": false, 00:10:24.172 "data_offset": 2048, 00:10:24.172 "data_size": 63488 00:10:24.172 }, 00:10:24.172 { 00:10:24.172 "name": null, 00:10:24.172 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.172 "is_configured": false, 00:10:24.172 "data_offset": 2048, 00:10:24.172 "data_size": 63488 00:10:24.172 } 00:10:24.172 ] 00:10:24.172 }' 
00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:24.172 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.432 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:24.432 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:24.432 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:24.432 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.432 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.432 [2024-11-28 18:50:53.947107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:24.432 [2024-11-28 18:50:53.947203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:24.432 [2024-11-28 18:50:53.947239] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:10:24.432 [2024-11-28 18:50:53.947266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:24.432 [2024-11-28 18:50:53.947696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:24.433 [2024-11-28 18:50:53.947752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:24.433 [2024-11-28 18:50:53.947848] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:24.433 [2024-11-28 18:50:53.947894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:24.433 pt2
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.433 [2024-11-28 18:50:53.959089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:24.433 [2024-11-28 18:50:53.959134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:24.433 [2024-11-28 18:50:53.959150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:10:24.433 [2024-11-28 18:50:53.959157] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:24.433 [2024-11-28 18:50:53.959510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:24.433 [2024-11-28 18:50:53.959528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:24.433 [2024-11-28 18:50:53.959579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:24.433 [2024-11-28 18:50:53.959602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:24.433 pt3
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.433 [2024-11-28 18:50:53.971088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:10:24.433 [2024-11-28 18:50:53.971129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:24.433 [2024-11-28 18:50:53.971144] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:10:24.433 [2024-11-28 18:50:53.971152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:24.433 [2024-11-28 18:50:53.971480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:24.433 [2024-11-28 18:50:53.971497] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:10:24.433 [2024-11-28 18:50:53.971549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:10:24.433 [2024-11-28 18:50:53.971566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:10:24.433 [2024-11-28 18:50:53.971669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:10:24.433 [2024-11-28 18:50:53.971679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:24.433 [2024-11-28 18:50:53.971902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:10:24.433 [2024-11-28 18:50:53.972021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:10:24.433 [2024-11-28 18:50:53.972032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:10:24.433 [2024-11-28 18:50:53.972122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:24.433 pt4
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:24.433 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.433 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:24.433 "name": "raid_bdev1",
00:10:24.433 "uuid": "dd9e4567-2383-4feb-ac1d-6d28e4acb721",
00:10:24.433 "strip_size_kb": 64,
00:10:24.433 "state": "online",
00:10:24.433 "raid_level": "raid0",
00:10:24.433 "superblock": true,
00:10:24.433 "num_base_bdevs": 4,
00:10:24.433 "num_base_bdevs_discovered": 4,
00:10:24.433 "num_base_bdevs_operational": 4,
00:10:24.433 "base_bdevs_list": [
00:10:24.433 {
00:10:24.433 "name": "pt1",
00:10:24.433 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:24.433 "is_configured": true,
00:10:24.433 "data_offset": 2048,
00:10:24.433 "data_size": 63488
00:10:24.433 },
00:10:24.433 {
00:10:24.433 "name": "pt2",
00:10:24.433 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:24.433 "is_configured": true,
00:10:24.433 "data_offset": 2048,
00:10:24.433 "data_size": 63488
00:10:24.433 },
00:10:24.433 {
00:10:24.433 "name": "pt3",
00:10:24.433 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:24.433 "is_configured": true,
00:10:24.433 "data_offset": 2048,
00:10:24.433 "data_size": 63488
00:10:24.433 },
00:10:24.433 {
00:10:24.433 "name": "pt4",
00:10:24.433 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:24.433 "is_configured": true,
00:10:24.433 "data_offset": 2048,
00:10:24.433 "data_size": 63488
00:10:24.433 }
00:10:24.433 ]
00:10:24.433 }'
00:10:24.433 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:24.433 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.003 [2024-11-28 18:50:54.407530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:25.003 "name": "raid_bdev1",
00:10:25.003 "aliases": [
00:10:25.003 "dd9e4567-2383-4feb-ac1d-6d28e4acb721"
00:10:25.003 ],
00:10:25.003 "product_name": "Raid Volume",
00:10:25.003 "block_size": 512,
00:10:25.003 "num_blocks": 253952,
00:10:25.003 "uuid": "dd9e4567-2383-4feb-ac1d-6d28e4acb721",
00:10:25.003 "assigned_rate_limits": {
00:10:25.003 "rw_ios_per_sec": 0,
00:10:25.003 "rw_mbytes_per_sec": 0,
00:10:25.003 "r_mbytes_per_sec": 0,
00:10:25.003 "w_mbytes_per_sec": 0
00:10:25.003 },
00:10:25.003 "claimed": false,
00:10:25.003 "zoned": false,
00:10:25.003 "supported_io_types": {
00:10:25.003 "read": true,
00:10:25.003 "write": true,
00:10:25.003 "unmap": true,
00:10:25.003 "flush": true,
00:10:25.003 "reset": true,
00:10:25.003 "nvme_admin": false,
00:10:25.003 "nvme_io": false,
00:10:25.003 "nvme_io_md": false,
00:10:25.003 "write_zeroes": true,
00:10:25.003 "zcopy": false,
00:10:25.003 "get_zone_info": false,
00:10:25.003 "zone_management": false,
00:10:25.003 "zone_append": false,
00:10:25.003 "compare": false,
00:10:25.003 "compare_and_write": false,
00:10:25.003 "abort": false,
00:10:25.003 "seek_hole": false,
00:10:25.003 "seek_data": false,
00:10:25.003 "copy": false,
00:10:25.003 "nvme_iov_md": false
00:10:25.003 },
00:10:25.003 "memory_domains": [
00:10:25.003 {
00:10:25.003 "dma_device_id": "system",
00:10:25.003 "dma_device_type": 1
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.003 "dma_device_type": 2
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "dma_device_id": "system",
00:10:25.003 "dma_device_type": 1
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.003 "dma_device_type": 2
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "dma_device_id": "system",
00:10:25.003 "dma_device_type": 1
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.003 "dma_device_type": 2
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "dma_device_id": "system",
00:10:25.003 "dma_device_type": 1
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.003 "dma_device_type": 2
00:10:25.003 }
00:10:25.003 ],
00:10:25.003 "driver_specific": {
00:10:25.003 "raid": {
00:10:25.003 "uuid": "dd9e4567-2383-4feb-ac1d-6d28e4acb721",
00:10:25.003 "strip_size_kb": 64,
00:10:25.003 "state": "online",
00:10:25.003 "raid_level": "raid0",
00:10:25.003 "superblock": true,
00:10:25.003 "num_base_bdevs": 4,
00:10:25.003 "num_base_bdevs_discovered": 4,
00:10:25.003 "num_base_bdevs_operational": 4,
00:10:25.003 "base_bdevs_list": [
00:10:25.003 {
00:10:25.003 "name": "pt1",
00:10:25.003 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:25.003 "is_configured": true,
00:10:25.003 "data_offset": 2048,
00:10:25.003 "data_size": 63488
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "name": "pt2",
00:10:25.003 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:25.003 "is_configured": true,
00:10:25.003 "data_offset": 2048,
00:10:25.003 "data_size": 63488
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "name": "pt3",
00:10:25.003 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:25.003 "is_configured": true,
00:10:25.003 "data_offset": 2048,
00:10:25.003 "data_size": 63488
00:10:25.003 },
00:10:25.003 {
00:10:25.003 "name": "pt4",
00:10:25.003 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:25.003 "is_configured": true,
00:10:25.003 "data_offset": 2048,
00:10:25.003 "data_size": 63488
00:10:25.003 }
00:10:25.003 ]
00:10:25.003 }
00:10:25.003 }
00:10:25.003 }'
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:25.003 pt2
00:10:25.003 pt3
00:10:25.003 pt4'
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:25.003 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:25.004 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.004 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.004 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.004 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.264 [2024-11-28 18:50:54.747605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dd9e4567-2383-4feb-ac1d-6d28e4acb721 '!=' dd9e4567-2383-4feb-ac1d-6d28e4acb721 ']'
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83169
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83169 ']'
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83169
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83169
killing process with pid 83169
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83169'
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83169
00:10:25.264 [2024-11-28 18:50:54.812648] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:25.264 [2024-11-28 18:50:54.812723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:25.264 [2024-11-28 18:50:54.812799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:25.264 [2024-11-28 18:50:54.812808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:10:25.264 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83169
00:10:25.264 [2024-11-28 18:50:54.854666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:25.525 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:10:25.525
00:10:25.525 real	0m4.083s
00:10:25.525 user	0m6.405s
00:10:25.525 sys	0m0.898s
00:10:25.525 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:25.525 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.525 ************************************
00:10:25.525 END TEST raid_superblock_test
00:10:25.525 ************************************
00:10:25.784 18:50:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:10:25.784 18:50:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:25.784 18:50:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:25.784 18:50:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:25.784 ************************************
00:10:25.784 START TEST raid_read_error_test
00:10:25.784 ************************************
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:25.784 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.blpiZPHUi8
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83417
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83417
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83417 ']'
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:25.785 18:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.785 [2024-11-28 18:50:55.247462] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:10:25.785 [2024-11-28 18:50:55.247675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83417 ]
00:10:25.785 [2024-11-28 18:50:55.382668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:26.044 [2024-11-28 18:50:55.419072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:26.044 [2024-11-28 18:50:55.444022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:26.044 [2024-11-28 18:50:55.485636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:26.044 [2024-11-28 18:50:55.485676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.615 BaseBdev1_malloc
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.615 true
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.615 [2024-11-28 18:50:56.093497] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:26.615 [2024-11-28 18:50:56.093549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:26.615 [2024-11-28 18:50:56.093565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:10:26.615 [2024-11-28 18:50:56.093578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:26.615 [2024-11-28 18:50:56.095612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:26.615 [2024-11-28 18:50:56.095715] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:26.615 BaseBdev1
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.615 BaseBdev2_malloc
00:10:26.615 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.616 true
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.616 [2024-11-28 18:50:56.134105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:26.616 [2024-11-28 18:50:56.134207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:26.616 [2024-11-28 18:50:56.134227] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:10:26.616 [2024-11-28 18:50:56.134237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:26.616 [2024-11-28 18:50:56.136221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:26.616 [2024-11-28 18:50:56.136261] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:26.616 BaseBdev2
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.616 BaseBdev3_malloc
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.616 true
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.616 [2024-11-28 18:50:56.174411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:10:26.616 [2024-11-28 18:50:56.174477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:26.616 [2024-11-28 18:50:56.174493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:10:26.616 [2024-11-28 18:50:56.174503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:26.616 [2024-11-28 18:50:56.176487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:26.616 [2024-11-28 18:50:56.176522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:10:26.616 BaseBdev3
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.616 BaseBdev4_malloc
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.616 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.877 true
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.877 [2024-11-28 18:50:56.230840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:10:26.877 [2024-11-28 18:50:56.230911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:26.877 [2024-11-28 18:50:56.230929] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:26.877 [2024-11-28 18:50:56.230940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:26.877 [2024-11-28 18:50:56.233097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:26.877 [2024-11-28 18:50:56.233140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:10:26.877 BaseBdev4
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.877 [2024-11-28 18:50:56.242841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:26.877 [2024-11-28 18:50:56.244686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:26.877 [2024-11-28 18:50:56.244760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:26.877 [2024-11-28 18:50:56.244825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:26.877 [2024-11-28 18:50:56.245019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:10:26.877 [2024-11-28 18:50:56.245033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:26.877 [2024-11-28 18:50:56.245254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0
00:10:26.877 [2024-11-28 18:50:56.245402] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:10:26.877 [2024-11-28 18:50:56.245412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:10:26.877 [2024-11-28 18:50:56.245547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:26.877 "name": "raid_bdev1",
00:10:26.877 "uuid": "7b1aa3c9-62fe-4c38-8169-cfa2af60a874",
00:10:26.877 "strip_size_kb": 64,
00:10:26.877 "state": "online",
00:10:26.877 "raid_level": "raid0",
00:10:26.877 "superblock": true,
00:10:26.877 "num_base_bdevs": 4,
00:10:26.877 "num_base_bdevs_discovered": 4,
00:10:26.877 "num_base_bdevs_operational": 4,
00:10:26.877 "base_bdevs_list": [
00:10:26.877 {
00:10:26.877 "name": "BaseBdev1",
00:10:26.877 "uuid": "8990b0d1-bb0b-58a8-af25-4139f2ccbb03",
00:10:26.877 "is_configured": true,
00:10:26.877 "data_offset": 2048,
00:10:26.877 "data_size": 63488
00:10:26.877 },
00:10:26.877 {
00:10:26.877 "name": "BaseBdev2",
00:10:26.877 "uuid": "84bc1c98-b30c-5296-b8c9-771bce616928",
00:10:26.877 "is_configured": true, 00:10:26.877 "data_offset": 2048, 00:10:26.877 "data_size": 63488 00:10:26.877 }, 00:10:26.877 { 00:10:26.877 "name": "BaseBdev3", 00:10:26.877 "uuid": "30ccd5d9-43db-58ba-8be8-2c2820c6d965", 00:10:26.877 "is_configured": true, 00:10:26.877 "data_offset": 2048, 00:10:26.877 "data_size": 63488 00:10:26.877 }, 00:10:26.877 { 00:10:26.877 "name": "BaseBdev4", 00:10:26.877 "uuid": "4aff8e29-2ba6-53af-b62c-f05c2ae78005", 00:10:26.877 "is_configured": true, 00:10:26.877 "data_offset": 2048, 00:10:26.877 "data_size": 63488 00:10:26.877 } 00:10:26.877 ] 00:10:26.877 }' 00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.877 18:50:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.137 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.137 18:50:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:27.400 [2024-11-28 18:50:56.767314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:28.339 18:50:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.339 "name": "raid_bdev1", 00:10:28.339 "uuid": "7b1aa3c9-62fe-4c38-8169-cfa2af60a874", 00:10:28.339 "strip_size_kb": 64, 00:10:28.339 "state": "online", 00:10:28.339 "raid_level": "raid0", 00:10:28.339 "superblock": true, 00:10:28.339 "num_base_bdevs": 4, 
00:10:28.339 "num_base_bdevs_discovered": 4, 00:10:28.339 "num_base_bdevs_operational": 4, 00:10:28.339 "base_bdevs_list": [ 00:10:28.339 { 00:10:28.339 "name": "BaseBdev1", 00:10:28.339 "uuid": "8990b0d1-bb0b-58a8-af25-4139f2ccbb03", 00:10:28.339 "is_configured": true, 00:10:28.339 "data_offset": 2048, 00:10:28.339 "data_size": 63488 00:10:28.339 }, 00:10:28.339 { 00:10:28.339 "name": "BaseBdev2", 00:10:28.339 "uuid": "84bc1c98-b30c-5296-b8c9-771bce616928", 00:10:28.339 "is_configured": true, 00:10:28.339 "data_offset": 2048, 00:10:28.339 "data_size": 63488 00:10:28.339 }, 00:10:28.339 { 00:10:28.339 "name": "BaseBdev3", 00:10:28.339 "uuid": "30ccd5d9-43db-58ba-8be8-2c2820c6d965", 00:10:28.339 "is_configured": true, 00:10:28.339 "data_offset": 2048, 00:10:28.339 "data_size": 63488 00:10:28.339 }, 00:10:28.339 { 00:10:28.339 "name": "BaseBdev4", 00:10:28.339 "uuid": "4aff8e29-2ba6-53af-b62c-f05c2ae78005", 00:10:28.339 "is_configured": true, 00:10:28.339 "data_offset": 2048, 00:10:28.339 "data_size": 63488 00:10:28.339 } 00:10:28.339 ] 00:10:28.339 }' 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.339 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.598 [2024-11-28 18:50:58.161769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.598 [2024-11-28 18:50:58.161871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.598 [2024-11-28 18:50:58.164457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.598 [2024-11-28 18:50:58.164556] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.598 [2024-11-28 18:50:58.164619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.598 [2024-11-28 18:50:58.164663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:28.598 { 00:10:28.598 "results": [ 00:10:28.598 { 00:10:28.598 "job": "raid_bdev1", 00:10:28.598 "core_mask": "0x1", 00:10:28.598 "workload": "randrw", 00:10:28.598 "percentage": 50, 00:10:28.598 "status": "finished", 00:10:28.598 "queue_depth": 1, 00:10:28.598 "io_size": 131072, 00:10:28.598 "runtime": 1.39263, 00:10:28.598 "iops": 16765.400716629687, 00:10:28.598 "mibps": 2095.675089578711, 00:10:28.598 "io_failed": 1, 00:10:28.598 "io_timeout": 0, 00:10:28.598 "avg_latency_us": 82.40811538834161, 00:10:28.598 "min_latency_us": 24.87928179203347, 00:10:28.598 "max_latency_us": 1320.9448269850955 00:10:28.598 } 00:10:28.598 ], 00:10:28.598 "core_count": 1 00:10:28.598 } 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83417 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83417 ']' 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83417 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.598 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83417 00:10:28.858 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.858 18:50:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.858 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83417' 00:10:28.858 killing process with pid 83417 00:10:28.858 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83417 00:10:28.858 [2024-11-28 18:50:58.210901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.858 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83417 00:10:28.858 [2024-11-28 18:50:58.245089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.858 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.blpiZPHUi8 00:10:28.858 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:28.858 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.126 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:29.126 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:29.126 ************************************ 00:10:29.126 END TEST raid_read_error_test 00:10:29.126 ************************************ 00:10:29.126 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.126 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.126 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:29.126 00:10:29.126 real 0m3.324s 00:10:29.126 user 0m4.183s 00:10:29.126 sys 0m0.541s 00:10:29.126 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.126 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.126 18:50:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 4 write 00:10:29.126 18:50:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.126 18:50:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.126 18:50:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.126 ************************************ 00:10:29.126 START TEST raid_write_error_test 00:10:29.126 ************************************ 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uFAeE6fVMv 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83552 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 83552 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 83552 ']' 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.126 18:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.126 [2024-11-28 18:50:58.642985] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:29.126 [2024-11-28 18:50:58.643208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83552 ] 00:10:29.452 [2024-11-28 18:50:58.776130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:29.452 [2024-11-28 18:50:58.803551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.452 [2024-11-28 18:50:58.828682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.452 [2024-11-28 18:50:58.870592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.452 [2024-11-28 18:50:58.870709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 BaseBdev1_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 true 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 [2024-11-28 18:50:59.498719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:30.020 [2024-11-28 18:50:59.498847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.020 [2024-11-28 18:50:59.498870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:30.020 [2024-11-28 18:50:59.498882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.020 [2024-11-28 18:50:59.500957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.020 [2024-11-28 18:50:59.501007] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.020 BaseBdev1 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 BaseBdev2_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 true 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 [2024-11-28 18:50:59.539096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:30.020 [2024-11-28 18:50:59.539144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.020 [2024-11-28 18:50:59.539159] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.020 [2024-11-28 18:50:59.539169] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.020 [2024-11-28 18:50:59.541195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.020 [2024-11-28 18:50:59.541298] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:30.020 BaseBdev2 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 BaseBdev3_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 true 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 [2024-11-28 18:50:59.579499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:30.020 [2024-11-28 18:50:59.579543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.020 [2024-11-28 18:50:59.579558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:30.020 [2024-11-28 18:50:59.579569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.020 [2024-11-28 18:50:59.581539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.020 [2024-11-28 18:50:59.581624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:30.020 BaseBdev3 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.020 BaseBdev4_malloc 00:10:30.020 
18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.020 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 true 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 [2024-11-28 18:50:59.638257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:30.279 [2024-11-28 18:50:59.638317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.279 [2024-11-28 18:50:59.638337] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:30.279 [2024-11-28 18:50:59.638349] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.279 [2024-11-28 18:50:59.640767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.279 [2024-11-28 18:50:59.640863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:30.279 BaseBdev4 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:30.279 18:50:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 [2024-11-28 18:50:59.650297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.279 [2024-11-28 18:50:59.652069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.279 [2024-11-28 18:50:59.652187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.279 [2024-11-28 18:50:59.652245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.279 [2024-11-28 18:50:59.652460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:30.279 [2024-11-28 18:50:59.652476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:30.279 [2024-11-28 18:50:59.652699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:30.279 [2024-11-28 18:50:59.652829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:30.279 [2024-11-28 18:50:59.652838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:30.279 [2024-11-28 18:50:59.652957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.279 "name": "raid_bdev1", 00:10:30.279 "uuid": "83651698-c36b-47f4-933e-ad4f6a530026", 00:10:30.279 "strip_size_kb": 64, 00:10:30.279 "state": "online", 00:10:30.279 "raid_level": "raid0", 00:10:30.279 "superblock": true, 00:10:30.279 "num_base_bdevs": 4, 00:10:30.279 "num_base_bdevs_discovered": 4, 00:10:30.279 "num_base_bdevs_operational": 4, 00:10:30.279 "base_bdevs_list": [ 00:10:30.279 { 00:10:30.279 "name": "BaseBdev1", 00:10:30.279 "uuid": "d5d0b3b3-0d1a-530e-b0f9-b729ffe3fd93", 00:10:30.279 "is_configured": true, 00:10:30.279 "data_offset": 2048, 00:10:30.279 "data_size": 63488 00:10:30.279 }, 00:10:30.279 { 00:10:30.279 
"name": "BaseBdev2", 00:10:30.279 "uuid": "03b8bb8a-5c3b-58f1-9697-98e341d62350", 00:10:30.279 "is_configured": true, 00:10:30.279 "data_offset": 2048, 00:10:30.279 "data_size": 63488 00:10:30.279 }, 00:10:30.279 { 00:10:30.279 "name": "BaseBdev3", 00:10:30.279 "uuid": "5fd08b83-5f56-5510-a99c-a7179ce41cc5", 00:10:30.279 "is_configured": true, 00:10:30.279 "data_offset": 2048, 00:10:30.279 "data_size": 63488 00:10:30.279 }, 00:10:30.279 { 00:10:30.279 "name": "BaseBdev4", 00:10:30.279 "uuid": "76908e47-f430-56a0-bcf1-c4a41909417a", 00:10:30.279 "is_configured": true, 00:10:30.279 "data_offset": 2048, 00:10:30.279 "data_size": 63488 00:10:30.279 } 00:10:30.279 ] 00:10:30.279 }' 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.279 18:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.537 18:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:30.537 18:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:30.795 [2024-11-28 18:51:00.206776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.734 "name": "raid_bdev1", 00:10:31.734 "uuid": "83651698-c36b-47f4-933e-ad4f6a530026", 00:10:31.734 "strip_size_kb": 64, 00:10:31.734 "state": "online", 
00:10:31.734 "raid_level": "raid0", 00:10:31.734 "superblock": true, 00:10:31.734 "num_base_bdevs": 4, 00:10:31.734 "num_base_bdevs_discovered": 4, 00:10:31.734 "num_base_bdevs_operational": 4, 00:10:31.734 "base_bdevs_list": [ 00:10:31.734 { 00:10:31.734 "name": "BaseBdev1", 00:10:31.734 "uuid": "d5d0b3b3-0d1a-530e-b0f9-b729ffe3fd93", 00:10:31.734 "is_configured": true, 00:10:31.734 "data_offset": 2048, 00:10:31.734 "data_size": 63488 00:10:31.734 }, 00:10:31.734 { 00:10:31.734 "name": "BaseBdev2", 00:10:31.734 "uuid": "03b8bb8a-5c3b-58f1-9697-98e341d62350", 00:10:31.734 "is_configured": true, 00:10:31.734 "data_offset": 2048, 00:10:31.734 "data_size": 63488 00:10:31.734 }, 00:10:31.734 { 00:10:31.734 "name": "BaseBdev3", 00:10:31.734 "uuid": "5fd08b83-5f56-5510-a99c-a7179ce41cc5", 00:10:31.734 "is_configured": true, 00:10:31.734 "data_offset": 2048, 00:10:31.734 "data_size": 63488 00:10:31.734 }, 00:10:31.734 { 00:10:31.734 "name": "BaseBdev4", 00:10:31.734 "uuid": "76908e47-f430-56a0-bcf1-c4a41909417a", 00:10:31.734 "is_configured": true, 00:10:31.734 "data_offset": 2048, 00:10:31.734 "data_size": 63488 00:10:31.734 } 00:10:31.734 ] 00:10:31.734 }' 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.734 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.994 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:31.994 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.994 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.994 [2024-11-28 18:51:01.593493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:31.994 [2024-11-28 18:51:01.593531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.994 [2024-11-28 18:51:01.596214] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.994 [2024-11-28 18:51:01.596277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.994 [2024-11-28 18:51:01.596321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.994 [2024-11-28 18:51:01.596333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:32.254 { 00:10:32.254 "results": [ 00:10:32.254 { 00:10:32.254 "job": "raid_bdev1", 00:10:32.254 "core_mask": "0x1", 00:10:32.254 "workload": "randrw", 00:10:32.254 "percentage": 50, 00:10:32.254 "status": "finished", 00:10:32.254 "queue_depth": 1, 00:10:32.254 "io_size": 131072, 00:10:32.254 "runtime": 1.384794, 00:10:32.254 "iops": 16728.84197938466, 00:10:32.254 "mibps": 2091.1052474230823, 00:10:32.254 "io_failed": 1, 00:10:32.254 "io_timeout": 0, 00:10:32.254 "avg_latency_us": 82.64403507890033, 00:10:32.254 "min_latency_us": 24.990848078096402, 00:10:32.254 "max_latency_us": 1356.646038525233 00:10:32.254 } 00:10:32.254 ], 00:10:32.254 "core_count": 1 00:10:32.254 } 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83552 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 83552 ']' 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 83552 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83552 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83552' 00:10:32.254 killing process with pid 83552 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 83552 00:10:32.254 [2024-11-28 18:51:01.642448] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.254 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 83552 00:10:32.254 [2024-11-28 18:51:01.676936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uFAeE6fVMv 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:32.515 00:10:32.515 real 0m3.351s 00:10:32.515 user 0m4.246s 00:10:32.515 sys 0m0.549s 00:10:32.515 ************************************ 00:10:32.515 END TEST raid_write_error_test 00:10:32.515 ************************************ 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.515 18:51:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.515 18:51:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:32.515 18:51:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:32.515 18:51:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.515 18:51:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.515 18:51:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.515 ************************************ 00:10:32.515 START TEST raid_state_function_test 00:10:32.515 ************************************ 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.515 18:51:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:32.515 18:51:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83679 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:32.515 Process raid pid: 83679 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83679' 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83679 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83679 ']' 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.515 18:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.515 [2024-11-28 18:51:02.062971] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:32.515 [2024-11-28 18:51:02.063202] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.775 [2024-11-28 18:51:02.198672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:32.775 [2024-11-28 18:51:02.238111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.775 [2024-11-28 18:51:02.264409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.775 [2024-11-28 18:51:02.307214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.775 [2024-11-28 18:51:02.307243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.344 [2024-11-28 18:51:02.886629] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.344 [2024-11-28 18:51:02.886683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.344 [2024-11-28 18:51:02.886695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.344 [2024-11-28 18:51:02.886703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.344 [2024-11-28 18:51:02.886712] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.344 [2024-11-28 18:51:02.886719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.344 [2024-11-28 18:51:02.886727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.344 
[2024-11-28 18:51:02.886734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.344 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.344 18:51:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.344 "name": "Existed_Raid", 00:10:33.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.344 "strip_size_kb": 64, 00:10:33.344 "state": "configuring", 00:10:33.344 "raid_level": "concat", 00:10:33.344 "superblock": false, 00:10:33.344 "num_base_bdevs": 4, 00:10:33.344 "num_base_bdevs_discovered": 0, 00:10:33.344 "num_base_bdevs_operational": 4, 00:10:33.344 "base_bdevs_list": [ 00:10:33.344 { 00:10:33.344 "name": "BaseBdev1", 00:10:33.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.344 "is_configured": false, 00:10:33.344 "data_offset": 0, 00:10:33.344 "data_size": 0 00:10:33.344 }, 00:10:33.344 { 00:10:33.344 "name": "BaseBdev2", 00:10:33.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.344 "is_configured": false, 00:10:33.344 "data_offset": 0, 00:10:33.344 "data_size": 0 00:10:33.344 }, 00:10:33.344 { 00:10:33.344 "name": "BaseBdev3", 00:10:33.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.344 "is_configured": false, 00:10:33.344 "data_offset": 0, 00:10:33.344 "data_size": 0 00:10:33.344 }, 00:10:33.344 { 00:10:33.344 "name": "BaseBdev4", 00:10:33.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.344 "is_configured": false, 00:10:33.344 "data_offset": 0, 00:10:33.344 "data_size": 0 00:10:33.344 } 00:10:33.345 ] 00:10:33.345 }' 00:10:33.345 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.345 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.913 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.913 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.913 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.913 [2024-11-28 18:51:03.290640] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.913 [2024-11-28 18:51:03.290721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:33.913 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.913 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.913 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.913 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.913 [2024-11-28 18:51:03.302676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.913 [2024-11-28 18:51:03.302746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.913 [2024-11-28 18:51:03.302777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.914 [2024-11-28 18:51:03.302799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.914 [2024-11-28 18:51:03.302881] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.914 [2024-11-28 18:51:03.302902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.914 [2024-11-28 18:51:03.302936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.914 [2024-11-28 18:51:03.302968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.914 [2024-11-28 18:51:03.327500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.914 BaseBdev1 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.914 [ 00:10:33.914 { 
00:10:33.914 "name": "BaseBdev1", 00:10:33.914 "aliases": [ 00:10:33.914 "4cfeabe9-340f-4942-a870-24b5a235a1cb" 00:10:33.914 ], 00:10:33.914 "product_name": "Malloc disk", 00:10:33.914 "block_size": 512, 00:10:33.914 "num_blocks": 65536, 00:10:33.914 "uuid": "4cfeabe9-340f-4942-a870-24b5a235a1cb", 00:10:33.914 "assigned_rate_limits": { 00:10:33.914 "rw_ios_per_sec": 0, 00:10:33.914 "rw_mbytes_per_sec": 0, 00:10:33.914 "r_mbytes_per_sec": 0, 00:10:33.914 "w_mbytes_per_sec": 0 00:10:33.914 }, 00:10:33.914 "claimed": true, 00:10:33.914 "claim_type": "exclusive_write", 00:10:33.914 "zoned": false, 00:10:33.914 "supported_io_types": { 00:10:33.914 "read": true, 00:10:33.914 "write": true, 00:10:33.914 "unmap": true, 00:10:33.914 "flush": true, 00:10:33.914 "reset": true, 00:10:33.914 "nvme_admin": false, 00:10:33.914 "nvme_io": false, 00:10:33.914 "nvme_io_md": false, 00:10:33.914 "write_zeroes": true, 00:10:33.914 "zcopy": true, 00:10:33.914 "get_zone_info": false, 00:10:33.914 "zone_management": false, 00:10:33.914 "zone_append": false, 00:10:33.914 "compare": false, 00:10:33.914 "compare_and_write": false, 00:10:33.914 "abort": true, 00:10:33.914 "seek_hole": false, 00:10:33.914 "seek_data": false, 00:10:33.914 "copy": true, 00:10:33.914 "nvme_iov_md": false 00:10:33.914 }, 00:10:33.914 "memory_domains": [ 00:10:33.914 { 00:10:33.914 "dma_device_id": "system", 00:10:33.914 "dma_device_type": 1 00:10:33.914 }, 00:10:33.914 { 00:10:33.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.914 "dma_device_type": 2 00:10:33.914 } 00:10:33.914 ], 00:10:33.914 "driver_specific": {} 00:10:33.914 } 00:10:33.914 ] 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.914 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.914 "name": "Existed_Raid", 00:10:33.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.914 "strip_size_kb": 64, 00:10:33.914 "state": "configuring", 00:10:33.914 "raid_level": "concat", 00:10:33.914 "superblock": false, 00:10:33.914 "num_base_bdevs": 4, 00:10:33.914 
"num_base_bdevs_discovered": 1, 00:10:33.914 "num_base_bdevs_operational": 4, 00:10:33.914 "base_bdevs_list": [ 00:10:33.914 { 00:10:33.914 "name": "BaseBdev1", 00:10:33.914 "uuid": "4cfeabe9-340f-4942-a870-24b5a235a1cb", 00:10:33.914 "is_configured": true, 00:10:33.914 "data_offset": 0, 00:10:33.914 "data_size": 65536 00:10:33.914 }, 00:10:33.914 { 00:10:33.914 "name": "BaseBdev2", 00:10:33.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.914 "is_configured": false, 00:10:33.914 "data_offset": 0, 00:10:33.914 "data_size": 0 00:10:33.914 }, 00:10:33.914 { 00:10:33.914 "name": "BaseBdev3", 00:10:33.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.914 "is_configured": false, 00:10:33.914 "data_offset": 0, 00:10:33.914 "data_size": 0 00:10:33.914 }, 00:10:33.914 { 00:10:33.914 "name": "BaseBdev4", 00:10:33.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.914 "is_configured": false, 00:10:33.914 "data_offset": 0, 00:10:33.914 "data_size": 0 00:10:33.914 } 00:10:33.915 ] 00:10:33.915 }' 00:10:33.915 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.915 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.484 [2024-11-28 18:51:03.807681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.484 [2024-11-28 18:51:03.807733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.484 18:51:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.484 [2024-11-28 18:51:03.819731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.484 [2024-11-28 18:51:03.821541] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.484 [2024-11-28 18:51:03.821579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.484 [2024-11-28 18:51:03.821590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.484 [2024-11-28 18:51:03.821597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.484 [2024-11-28 18:51:03.821604] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.484 [2024-11-28 18:51:03.821610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.484 "name": "Existed_Raid", 00:10:34.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.484 "strip_size_kb": 64, 00:10:34.484 "state": "configuring", 00:10:34.484 "raid_level": "concat", 00:10:34.484 "superblock": false, 00:10:34.484 "num_base_bdevs": 4, 00:10:34.484 "num_base_bdevs_discovered": 1, 00:10:34.484 "num_base_bdevs_operational": 4, 00:10:34.484 "base_bdevs_list": [ 00:10:34.484 { 00:10:34.484 "name": "BaseBdev1", 00:10:34.484 "uuid": "4cfeabe9-340f-4942-a870-24b5a235a1cb", 00:10:34.484 
"is_configured": true, 00:10:34.484 "data_offset": 0, 00:10:34.484 "data_size": 65536 00:10:34.484 }, 00:10:34.484 { 00:10:34.484 "name": "BaseBdev2", 00:10:34.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.484 "is_configured": false, 00:10:34.484 "data_offset": 0, 00:10:34.484 "data_size": 0 00:10:34.484 }, 00:10:34.484 { 00:10:34.484 "name": "BaseBdev3", 00:10:34.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.484 "is_configured": false, 00:10:34.484 "data_offset": 0, 00:10:34.484 "data_size": 0 00:10:34.484 }, 00:10:34.484 { 00:10:34.484 "name": "BaseBdev4", 00:10:34.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.484 "is_configured": false, 00:10:34.484 "data_offset": 0, 00:10:34.484 "data_size": 0 00:10:34.484 } 00:10:34.484 ] 00:10:34.484 }' 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.484 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.744 [2024-11-28 18:51:04.278875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.744 BaseBdev2 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.744 18:51:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.744 [ 00:10:34.744 { 00:10:34.744 "name": "BaseBdev2", 00:10:34.744 "aliases": [ 00:10:34.744 "1435eaab-848e-4129-8209-cb1a8a4902df" 00:10:34.744 ], 00:10:34.744 "product_name": "Malloc disk", 00:10:34.744 "block_size": 512, 00:10:34.744 "num_blocks": 65536, 00:10:34.744 "uuid": "1435eaab-848e-4129-8209-cb1a8a4902df", 00:10:34.744 "assigned_rate_limits": { 00:10:34.744 "rw_ios_per_sec": 0, 00:10:34.744 "rw_mbytes_per_sec": 0, 00:10:34.744 "r_mbytes_per_sec": 0, 00:10:34.744 "w_mbytes_per_sec": 0 00:10:34.744 }, 00:10:34.744 "claimed": true, 00:10:34.744 "claim_type": "exclusive_write", 00:10:34.744 "zoned": false, 00:10:34.744 "supported_io_types": { 00:10:34.744 "read": true, 00:10:34.744 "write": true, 00:10:34.744 "unmap": true, 00:10:34.744 "flush": true, 00:10:34.744 "reset": true, 00:10:34.744 "nvme_admin": false, 00:10:34.744 "nvme_io": false, 00:10:34.744 "nvme_io_md": 
false, 00:10:34.744 "write_zeroes": true, 00:10:34.744 "zcopy": true, 00:10:34.744 "get_zone_info": false, 00:10:34.744 "zone_management": false, 00:10:34.744 "zone_append": false, 00:10:34.744 "compare": false, 00:10:34.744 "compare_and_write": false, 00:10:34.744 "abort": true, 00:10:34.744 "seek_hole": false, 00:10:34.744 "seek_data": false, 00:10:34.744 "copy": true, 00:10:34.744 "nvme_iov_md": false 00:10:34.744 }, 00:10:34.744 "memory_domains": [ 00:10:34.744 { 00:10:34.744 "dma_device_id": "system", 00:10:34.744 "dma_device_type": 1 00:10:34.744 }, 00:10:34.744 { 00:10:34.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.744 "dma_device_type": 2 00:10:34.744 } 00:10:34.744 ], 00:10:34.744 "driver_specific": {} 00:10:34.744 } 00:10:34.744 ] 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.744 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.745 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.004 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.004 "name": "Existed_Raid", 00:10:35.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.004 "strip_size_kb": 64, 00:10:35.004 "state": "configuring", 00:10:35.004 "raid_level": "concat", 00:10:35.004 "superblock": false, 00:10:35.004 "num_base_bdevs": 4, 00:10:35.004 "num_base_bdevs_discovered": 2, 00:10:35.004 "num_base_bdevs_operational": 4, 00:10:35.004 "base_bdevs_list": [ 00:10:35.004 { 00:10:35.004 "name": "BaseBdev1", 00:10:35.004 "uuid": "4cfeabe9-340f-4942-a870-24b5a235a1cb", 00:10:35.004 "is_configured": true, 00:10:35.004 "data_offset": 0, 00:10:35.004 "data_size": 65536 00:10:35.004 }, 00:10:35.004 { 00:10:35.004 "name": "BaseBdev2", 00:10:35.004 "uuid": "1435eaab-848e-4129-8209-cb1a8a4902df", 00:10:35.004 "is_configured": true, 00:10:35.004 "data_offset": 0, 00:10:35.004 "data_size": 65536 00:10:35.004 }, 00:10:35.004 { 00:10:35.004 "name": "BaseBdev3", 00:10:35.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.004 
"is_configured": false, 00:10:35.004 "data_offset": 0, 00:10:35.004 "data_size": 0 00:10:35.004 }, 00:10:35.004 { 00:10:35.004 "name": "BaseBdev4", 00:10:35.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.004 "is_configured": false, 00:10:35.004 "data_offset": 0, 00:10:35.004 "data_size": 0 00:10:35.004 } 00:10:35.004 ] 00:10:35.004 }' 00:10:35.005 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.005 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.264 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.264 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.264 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.264 [2024-11-28 18:51:04.759026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.264 BaseBdev3 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.265 18:51:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.265 [ 00:10:35.265 { 00:10:35.265 "name": "BaseBdev3", 00:10:35.265 "aliases": [ 00:10:35.265 "e172eaff-0b2b-4b8d-9984-c6fe91ffe400" 00:10:35.265 ], 00:10:35.265 "product_name": "Malloc disk", 00:10:35.265 "block_size": 512, 00:10:35.265 "num_blocks": 65536, 00:10:35.265 "uuid": "e172eaff-0b2b-4b8d-9984-c6fe91ffe400", 00:10:35.265 "assigned_rate_limits": { 00:10:35.265 "rw_ios_per_sec": 0, 00:10:35.265 "rw_mbytes_per_sec": 0, 00:10:35.265 "r_mbytes_per_sec": 0, 00:10:35.265 "w_mbytes_per_sec": 0 00:10:35.265 }, 00:10:35.265 "claimed": true, 00:10:35.265 "claim_type": "exclusive_write", 00:10:35.265 "zoned": false, 00:10:35.265 "supported_io_types": { 00:10:35.265 "read": true, 00:10:35.265 "write": true, 00:10:35.265 "unmap": true, 00:10:35.265 "flush": true, 00:10:35.265 "reset": true, 00:10:35.265 "nvme_admin": false, 00:10:35.265 "nvme_io": false, 00:10:35.265 "nvme_io_md": false, 00:10:35.265 "write_zeroes": true, 00:10:35.265 "zcopy": true, 00:10:35.265 "get_zone_info": false, 00:10:35.265 "zone_management": false, 00:10:35.265 "zone_append": false, 00:10:35.265 "compare": false, 00:10:35.265 "compare_and_write": false, 00:10:35.265 "abort": true, 00:10:35.265 "seek_hole": false, 00:10:35.265 "seek_data": false, 00:10:35.265 "copy": true, 00:10:35.265 "nvme_iov_md": false 00:10:35.265 }, 00:10:35.265 
"memory_domains": [ 00:10:35.265 { 00:10:35.265 "dma_device_id": "system", 00:10:35.265 "dma_device_type": 1 00:10:35.265 }, 00:10:35.265 { 00:10:35.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.265 "dma_device_type": 2 00:10:35.265 } 00:10:35.265 ], 00:10:35.265 "driver_specific": {} 00:10:35.265 } 00:10:35.265 ] 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.265 "name": "Existed_Raid", 00:10:35.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.265 "strip_size_kb": 64, 00:10:35.265 "state": "configuring", 00:10:35.265 "raid_level": "concat", 00:10:35.265 "superblock": false, 00:10:35.265 "num_base_bdevs": 4, 00:10:35.265 "num_base_bdevs_discovered": 3, 00:10:35.265 "num_base_bdevs_operational": 4, 00:10:35.265 "base_bdevs_list": [ 00:10:35.265 { 00:10:35.265 "name": "BaseBdev1", 00:10:35.265 "uuid": "4cfeabe9-340f-4942-a870-24b5a235a1cb", 00:10:35.265 "is_configured": true, 00:10:35.265 "data_offset": 0, 00:10:35.265 "data_size": 65536 00:10:35.265 }, 00:10:35.265 { 00:10:35.265 "name": "BaseBdev2", 00:10:35.265 "uuid": "1435eaab-848e-4129-8209-cb1a8a4902df", 00:10:35.265 "is_configured": true, 00:10:35.265 "data_offset": 0, 00:10:35.265 "data_size": 65536 00:10:35.265 }, 00:10:35.265 { 00:10:35.265 "name": "BaseBdev3", 00:10:35.265 "uuid": "e172eaff-0b2b-4b8d-9984-c6fe91ffe400", 00:10:35.265 "is_configured": true, 00:10:35.265 "data_offset": 0, 00:10:35.265 "data_size": 65536 00:10:35.265 }, 00:10:35.265 { 00:10:35.265 "name": "BaseBdev4", 00:10:35.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.265 "is_configured": false, 00:10:35.265 "data_offset": 0, 00:10:35.265 "data_size": 0 00:10:35.265 } 00:10:35.265 ] 00:10:35.265 }' 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:10:35.265 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.833 [2024-11-28 18:51:05.238045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.833 [2024-11-28 18:51:05.238150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:35.833 [2024-11-28 18:51:05.238194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:35.833 [2024-11-28 18:51:05.238525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:35.833 [2024-11-28 18:51:05.238711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:35.833 [2024-11-28 18:51:05.238752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:35.833 [2024-11-28 18:51:05.238993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.833 BaseBdev4 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.833 18:51:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.833 [ 00:10:35.833 { 00:10:35.833 "name": "BaseBdev4", 00:10:35.833 "aliases": [ 00:10:35.833 "cbf7f6b1-c98e-4fc8-ae62-ba3e77b62d05" 00:10:35.833 ], 00:10:35.833 "product_name": "Malloc disk", 00:10:35.833 "block_size": 512, 00:10:35.833 "num_blocks": 65536, 00:10:35.833 "uuid": "cbf7f6b1-c98e-4fc8-ae62-ba3e77b62d05", 00:10:35.833 "assigned_rate_limits": { 00:10:35.833 "rw_ios_per_sec": 0, 00:10:35.833 "rw_mbytes_per_sec": 0, 00:10:35.833 "r_mbytes_per_sec": 0, 00:10:35.833 "w_mbytes_per_sec": 0 00:10:35.833 }, 00:10:35.833 "claimed": true, 00:10:35.833 "claim_type": "exclusive_write", 00:10:35.833 "zoned": false, 00:10:35.833 "supported_io_types": { 00:10:35.833 "read": true, 00:10:35.833 "write": true, 00:10:35.833 "unmap": true, 00:10:35.833 "flush": true, 00:10:35.833 "reset": true, 00:10:35.833 "nvme_admin": false, 00:10:35.833 "nvme_io": false, 00:10:35.833 "nvme_io_md": false, 00:10:35.833 "write_zeroes": true, 00:10:35.833 "zcopy": true, 00:10:35.833 "get_zone_info": false, 
00:10:35.833 "zone_management": false, 00:10:35.833 "zone_append": false, 00:10:35.833 "compare": false, 00:10:35.833 "compare_and_write": false, 00:10:35.833 "abort": true, 00:10:35.833 "seek_hole": false, 00:10:35.833 "seek_data": false, 00:10:35.833 "copy": true, 00:10:35.833 "nvme_iov_md": false 00:10:35.833 }, 00:10:35.833 "memory_domains": [ 00:10:35.833 { 00:10:35.833 "dma_device_id": "system", 00:10:35.833 "dma_device_type": 1 00:10:35.833 }, 00:10:35.833 { 00:10:35.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.833 "dma_device_type": 2 00:10:35.833 } 00:10:35.833 ], 00:10:35.833 "driver_specific": {} 00:10:35.833 } 00:10:35.833 ] 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.833 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.833 "name": "Existed_Raid", 00:10:35.833 "uuid": "0acc5fb5-8142-4973-ad28-d86b38983e2b", 00:10:35.833 "strip_size_kb": 64, 00:10:35.833 "state": "online", 00:10:35.833 "raid_level": "concat", 00:10:35.833 "superblock": false, 00:10:35.834 "num_base_bdevs": 4, 00:10:35.834 "num_base_bdevs_discovered": 4, 00:10:35.834 "num_base_bdevs_operational": 4, 00:10:35.834 "base_bdevs_list": [ 00:10:35.834 { 00:10:35.834 "name": "BaseBdev1", 00:10:35.834 "uuid": "4cfeabe9-340f-4942-a870-24b5a235a1cb", 00:10:35.834 "is_configured": true, 00:10:35.834 "data_offset": 0, 00:10:35.834 "data_size": 65536 00:10:35.834 }, 00:10:35.834 { 00:10:35.834 "name": "BaseBdev2", 00:10:35.834 "uuid": "1435eaab-848e-4129-8209-cb1a8a4902df", 00:10:35.834 "is_configured": true, 00:10:35.834 "data_offset": 0, 00:10:35.834 "data_size": 65536 00:10:35.834 }, 00:10:35.834 { 00:10:35.834 "name": "BaseBdev3", 00:10:35.834 "uuid": "e172eaff-0b2b-4b8d-9984-c6fe91ffe400", 00:10:35.834 "is_configured": true, 00:10:35.834 "data_offset": 0, 00:10:35.834 "data_size": 65536 00:10:35.834 }, 00:10:35.834 { 
00:10:35.834 "name": "BaseBdev4", 00:10:35.834 "uuid": "cbf7f6b1-c98e-4fc8-ae62-ba3e77b62d05", 00:10:35.834 "is_configured": true, 00:10:35.834 "data_offset": 0, 00:10:35.834 "data_size": 65536 00:10:35.834 } 00:10:35.834 ] 00:10:35.834 }' 00:10:35.834 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.834 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.093 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.352 [2024-11-28 18:51:05.702535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.352 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.352 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.352 "name": "Existed_Raid", 00:10:36.352 "aliases": [ 00:10:36.352 
"0acc5fb5-8142-4973-ad28-d86b38983e2b" 00:10:36.352 ], 00:10:36.352 "product_name": "Raid Volume", 00:10:36.352 "block_size": 512, 00:10:36.352 "num_blocks": 262144, 00:10:36.352 "uuid": "0acc5fb5-8142-4973-ad28-d86b38983e2b", 00:10:36.352 "assigned_rate_limits": { 00:10:36.352 "rw_ios_per_sec": 0, 00:10:36.352 "rw_mbytes_per_sec": 0, 00:10:36.352 "r_mbytes_per_sec": 0, 00:10:36.352 "w_mbytes_per_sec": 0 00:10:36.352 }, 00:10:36.352 "claimed": false, 00:10:36.352 "zoned": false, 00:10:36.352 "supported_io_types": { 00:10:36.352 "read": true, 00:10:36.352 "write": true, 00:10:36.352 "unmap": true, 00:10:36.352 "flush": true, 00:10:36.352 "reset": true, 00:10:36.352 "nvme_admin": false, 00:10:36.352 "nvme_io": false, 00:10:36.352 "nvme_io_md": false, 00:10:36.352 "write_zeroes": true, 00:10:36.352 "zcopy": false, 00:10:36.352 "get_zone_info": false, 00:10:36.352 "zone_management": false, 00:10:36.352 "zone_append": false, 00:10:36.352 "compare": false, 00:10:36.352 "compare_and_write": false, 00:10:36.352 "abort": false, 00:10:36.352 "seek_hole": false, 00:10:36.352 "seek_data": false, 00:10:36.352 "copy": false, 00:10:36.352 "nvme_iov_md": false 00:10:36.352 }, 00:10:36.352 "memory_domains": [ 00:10:36.352 { 00:10:36.352 "dma_device_id": "system", 00:10:36.352 "dma_device_type": 1 00:10:36.352 }, 00:10:36.352 { 00:10:36.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.352 "dma_device_type": 2 00:10:36.352 }, 00:10:36.352 { 00:10:36.352 "dma_device_id": "system", 00:10:36.352 "dma_device_type": 1 00:10:36.352 }, 00:10:36.352 { 00:10:36.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.352 "dma_device_type": 2 00:10:36.352 }, 00:10:36.352 { 00:10:36.352 "dma_device_id": "system", 00:10:36.352 "dma_device_type": 1 00:10:36.352 }, 00:10:36.352 { 00:10:36.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.352 "dma_device_type": 2 00:10:36.352 }, 00:10:36.352 { 00:10:36.352 "dma_device_id": "system", 00:10:36.352 "dma_device_type": 1 00:10:36.352 }, 
00:10:36.352 { 00:10:36.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.352 "dma_device_type": 2 00:10:36.352 } 00:10:36.352 ], 00:10:36.352 "driver_specific": { 00:10:36.352 "raid": { 00:10:36.352 "uuid": "0acc5fb5-8142-4973-ad28-d86b38983e2b", 00:10:36.352 "strip_size_kb": 64, 00:10:36.352 "state": "online", 00:10:36.352 "raid_level": "concat", 00:10:36.352 "superblock": false, 00:10:36.352 "num_base_bdevs": 4, 00:10:36.352 "num_base_bdevs_discovered": 4, 00:10:36.352 "num_base_bdevs_operational": 4, 00:10:36.352 "base_bdevs_list": [ 00:10:36.352 { 00:10:36.352 "name": "BaseBdev1", 00:10:36.352 "uuid": "4cfeabe9-340f-4942-a870-24b5a235a1cb", 00:10:36.352 "is_configured": true, 00:10:36.352 "data_offset": 0, 00:10:36.352 "data_size": 65536 00:10:36.352 }, 00:10:36.352 { 00:10:36.352 "name": "BaseBdev2", 00:10:36.352 "uuid": "1435eaab-848e-4129-8209-cb1a8a4902df", 00:10:36.352 "is_configured": true, 00:10:36.352 "data_offset": 0, 00:10:36.352 "data_size": 65536 00:10:36.352 }, 00:10:36.352 { 00:10:36.352 "name": "BaseBdev3", 00:10:36.352 "uuid": "e172eaff-0b2b-4b8d-9984-c6fe91ffe400", 00:10:36.352 "is_configured": true, 00:10:36.352 "data_offset": 0, 00:10:36.352 "data_size": 65536 00:10:36.352 }, 00:10:36.352 { 00:10:36.352 "name": "BaseBdev4", 00:10:36.352 "uuid": "cbf7f6b1-c98e-4fc8-ae62-ba3e77b62d05", 00:10:36.352 "is_configured": true, 00:10:36.352 "data_offset": 0, 00:10:36.352 "data_size": 65536 00:10:36.352 } 00:10:36.352 ] 00:10:36.352 } 00:10:36.352 } 00:10:36.352 }' 00:10:36.352 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:36.353 BaseBdev2 00:10:36.353 BaseBdev3 00:10:36.353 BaseBdev4' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.353 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd 
bdev_malloc_delete BaseBdev1 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.612 [2024-11-28 18:51:05.970314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.612 [2024-11-28 18:51:05.970338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.612 [2024-11-28 18:51:05.970406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.612 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.612 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.612 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.612 "name": "Existed_Raid", 00:10:36.612 "uuid": "0acc5fb5-8142-4973-ad28-d86b38983e2b", 00:10:36.612 "strip_size_kb": 64, 00:10:36.612 "state": "offline", 00:10:36.612 "raid_level": "concat", 00:10:36.612 "superblock": false, 00:10:36.612 "num_base_bdevs": 4, 00:10:36.612 "num_base_bdevs_discovered": 3, 00:10:36.612 "num_base_bdevs_operational": 3, 00:10:36.612 "base_bdevs_list": [ 00:10:36.612 { 00:10:36.612 "name": null, 00:10:36.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.612 "is_configured": false, 00:10:36.612 "data_offset": 0, 00:10:36.612 "data_size": 65536 00:10:36.612 }, 00:10:36.612 { 00:10:36.612 "name": "BaseBdev2", 00:10:36.612 "uuid": "1435eaab-848e-4129-8209-cb1a8a4902df", 00:10:36.612 "is_configured": true, 00:10:36.612 "data_offset": 0, 00:10:36.612 "data_size": 65536 00:10:36.612 }, 00:10:36.612 { 00:10:36.612 "name": "BaseBdev3", 00:10:36.612 "uuid": "e172eaff-0b2b-4b8d-9984-c6fe91ffe400", 
00:10:36.612 "is_configured": true, 00:10:36.612 "data_offset": 0, 00:10:36.612 "data_size": 65536 00:10:36.612 }, 00:10:36.612 { 00:10:36.612 "name": "BaseBdev4", 00:10:36.612 "uuid": "cbf7f6b1-c98e-4fc8-ae62-ba3e77b62d05", 00:10:36.612 "is_configured": true, 00:10:36.612 "data_offset": 0, 00:10:36.612 "data_size": 65536 00:10:36.612 } 00:10:36.612 ] 00:10:36.612 }' 00:10:36.612 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.612 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.872 [2024-11-28 18:51:06.413754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.872 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.132 [2024-11-28 18:51:06.485013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.132 
18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.132 [2024-11-28 18:51:06.556114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:37.132 [2024-11-28 18:51:06.556169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.132 BaseBdev2 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.132 [ 00:10:37.132 { 00:10:37.132 "name": "BaseBdev2", 00:10:37.132 "aliases": [ 00:10:37.132 "7e3f1a9d-4ee0-4f96-b946-207a48aedeef" 00:10:37.132 ], 00:10:37.132 "product_name": "Malloc disk", 00:10:37.132 "block_size": 512, 00:10:37.132 "num_blocks": 65536, 00:10:37.132 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:37.132 "assigned_rate_limits": { 00:10:37.132 "rw_ios_per_sec": 0, 00:10:37.132 "rw_mbytes_per_sec": 0, 00:10:37.132 "r_mbytes_per_sec": 0, 00:10:37.132 "w_mbytes_per_sec": 0 00:10:37.132 }, 00:10:37.132 "claimed": false, 00:10:37.132 "zoned": false, 00:10:37.132 "supported_io_types": { 00:10:37.132 "read": true, 00:10:37.132 "write": true, 00:10:37.132 "unmap": true, 00:10:37.132 "flush": true, 00:10:37.132 "reset": true, 00:10:37.132 "nvme_admin": false, 00:10:37.132 "nvme_io": false, 00:10:37.132 "nvme_io_md": false, 00:10:37.132 "write_zeroes": true, 00:10:37.132 "zcopy": true, 00:10:37.132 "get_zone_info": false, 00:10:37.132 "zone_management": false, 00:10:37.132 "zone_append": false, 00:10:37.132 "compare": false, 00:10:37.132 "compare_and_write": false, 00:10:37.132 "abort": true, 00:10:37.132 "seek_hole": false, 00:10:37.132 "seek_data": false, 00:10:37.132 "copy": true, 00:10:37.132 "nvme_iov_md": false 00:10:37.132 }, 00:10:37.132 "memory_domains": [ 00:10:37.132 { 00:10:37.132 "dma_device_id": "system", 00:10:37.132 
"dma_device_type": 1 00:10:37.132 }, 00:10:37.132 { 00:10:37.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.132 "dma_device_type": 2 00:10:37.132 } 00:10:37.132 ], 00:10:37.132 "driver_specific": {} 00:10:37.132 } 00:10:37.132 ] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.132 BaseBdev3 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.132 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.132 [ 00:10:37.132 { 00:10:37.132 "name": "BaseBdev3", 00:10:37.132 "aliases": [ 00:10:37.132 "659aeae2-1395-4508-81b6-b79f2e49226b" 00:10:37.132 ], 00:10:37.132 "product_name": "Malloc disk", 00:10:37.132 "block_size": 512, 00:10:37.132 "num_blocks": 65536, 00:10:37.132 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:37.132 "assigned_rate_limits": { 00:10:37.132 "rw_ios_per_sec": 0, 00:10:37.132 "rw_mbytes_per_sec": 0, 00:10:37.132 "r_mbytes_per_sec": 0, 00:10:37.133 "w_mbytes_per_sec": 0 00:10:37.133 }, 00:10:37.133 "claimed": false, 00:10:37.133 "zoned": false, 00:10:37.133 "supported_io_types": { 00:10:37.133 "read": true, 00:10:37.133 "write": true, 00:10:37.133 "unmap": true, 00:10:37.133 "flush": true, 00:10:37.133 "reset": true, 00:10:37.133 "nvme_admin": false, 00:10:37.133 "nvme_io": false, 00:10:37.133 "nvme_io_md": false, 00:10:37.133 "write_zeroes": true, 00:10:37.133 "zcopy": true, 00:10:37.133 "get_zone_info": false, 00:10:37.133 "zone_management": false, 00:10:37.133 "zone_append": false, 00:10:37.133 "compare": false, 00:10:37.133 "compare_and_write": false, 00:10:37.133 "abort": true, 00:10:37.133 "seek_hole": false, 00:10:37.133 "seek_data": false, 00:10:37.133 "copy": true, 00:10:37.133 "nvme_iov_md": false 00:10:37.133 }, 00:10:37.133 "memory_domains": [ 00:10:37.133 { 00:10:37.133 "dma_device_id": "system", 00:10:37.133 
"dma_device_type": 1 00:10:37.133 }, 00:10:37.133 { 00:10:37.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.133 "dma_device_type": 2 00:10:37.133 } 00:10:37.133 ], 00:10:37.133 "driver_specific": {} 00:10:37.133 } 00:10:37.133 ] 00:10:37.133 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.133 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.133 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.133 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.133 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:37.133 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.133 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 BaseBdev4 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 [ 00:10:37.393 { 00:10:37.393 "name": "BaseBdev4", 00:10:37.393 "aliases": [ 00:10:37.393 "4644289d-be4e-4417-8aab-75c947d08242" 00:10:37.393 ], 00:10:37.393 "product_name": "Malloc disk", 00:10:37.393 "block_size": 512, 00:10:37.393 "num_blocks": 65536, 00:10:37.393 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 00:10:37.393 "assigned_rate_limits": { 00:10:37.393 "rw_ios_per_sec": 0, 00:10:37.393 "rw_mbytes_per_sec": 0, 00:10:37.393 "r_mbytes_per_sec": 0, 00:10:37.393 "w_mbytes_per_sec": 0 00:10:37.393 }, 00:10:37.393 "claimed": false, 00:10:37.393 "zoned": false, 00:10:37.393 "supported_io_types": { 00:10:37.393 "read": true, 00:10:37.393 "write": true, 00:10:37.393 "unmap": true, 00:10:37.393 "flush": true, 00:10:37.393 "reset": true, 00:10:37.393 "nvme_admin": false, 00:10:37.393 "nvme_io": false, 00:10:37.393 "nvme_io_md": false, 00:10:37.393 "write_zeroes": true, 00:10:37.393 "zcopy": true, 00:10:37.393 "get_zone_info": false, 00:10:37.393 "zone_management": false, 00:10:37.393 "zone_append": false, 00:10:37.393 "compare": false, 00:10:37.393 "compare_and_write": false, 00:10:37.393 "abort": true, 00:10:37.393 "seek_hole": false, 00:10:37.393 "seek_data": false, 00:10:37.393 "copy": true, 00:10:37.393 "nvme_iov_md": false 00:10:37.393 }, 00:10:37.393 "memory_domains": [ 00:10:37.393 { 00:10:37.393 "dma_device_id": "system", 00:10:37.393 
"dma_device_type": 1 00:10:37.393 }, 00:10:37.393 { 00:10:37.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.393 "dma_device_type": 2 00:10:37.393 } 00:10:37.393 ], 00:10:37.393 "driver_specific": {} 00:10:37.393 } 00:10:37.393 ] 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 [2024-11-28 18:51:06.784376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.393 [2024-11-28 18:51:06.784438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.393 [2024-11-28 18:51:06.784458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.393 [2024-11-28 18:51:06.786229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.393 [2024-11-28 18:51:06.786279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.393 18:51:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.393 "name": "Existed_Raid", 00:10:37.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.393 "strip_size_kb": 64, 00:10:37.393 "state": "configuring", 00:10:37.393 "raid_level": "concat", 00:10:37.393 "superblock": false, 00:10:37.393 "num_base_bdevs": 4, 00:10:37.393 "num_base_bdevs_discovered": 3, 00:10:37.393 
"num_base_bdevs_operational": 4, 00:10:37.393 "base_bdevs_list": [ 00:10:37.393 { 00:10:37.393 "name": "BaseBdev1", 00:10:37.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.393 "is_configured": false, 00:10:37.393 "data_offset": 0, 00:10:37.393 "data_size": 0 00:10:37.393 }, 00:10:37.393 { 00:10:37.393 "name": "BaseBdev2", 00:10:37.393 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:37.393 "is_configured": true, 00:10:37.393 "data_offset": 0, 00:10:37.393 "data_size": 65536 00:10:37.393 }, 00:10:37.393 { 00:10:37.393 "name": "BaseBdev3", 00:10:37.393 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:37.393 "is_configured": true, 00:10:37.393 "data_offset": 0, 00:10:37.393 "data_size": 65536 00:10:37.393 }, 00:10:37.393 { 00:10:37.393 "name": "BaseBdev4", 00:10:37.393 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 00:10:37.393 "is_configured": true, 00:10:37.393 "data_offset": 0, 00:10:37.393 "data_size": 65536 00:10:37.393 } 00:10:37.393 ] 00:10:37.393 }' 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.393 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.653 [2024-11-28 18:51:07.204477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.653 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.654 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.654 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.654 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.654 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.654 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.654 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.654 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.654 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.913 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.913 "name": "Existed_Raid", 00:10:37.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.913 "strip_size_kb": 64, 00:10:37.913 "state": "configuring", 00:10:37.913 "raid_level": "concat", 00:10:37.913 "superblock": false, 00:10:37.913 "num_base_bdevs": 4, 00:10:37.913 "num_base_bdevs_discovered": 2, 00:10:37.913 "num_base_bdevs_operational": 4, 00:10:37.913 "base_bdevs_list": [ 
00:10:37.913 { 00:10:37.913 "name": "BaseBdev1", 00:10:37.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.913 "is_configured": false, 00:10:37.913 "data_offset": 0, 00:10:37.913 "data_size": 0 00:10:37.913 }, 00:10:37.913 { 00:10:37.913 "name": null, 00:10:37.913 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:37.913 "is_configured": false, 00:10:37.913 "data_offset": 0, 00:10:37.913 "data_size": 65536 00:10:37.913 }, 00:10:37.913 { 00:10:37.913 "name": "BaseBdev3", 00:10:37.913 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:37.913 "is_configured": true, 00:10:37.913 "data_offset": 0, 00:10:37.913 "data_size": 65536 00:10:37.913 }, 00:10:37.913 { 00:10:37.913 "name": "BaseBdev4", 00:10:37.913 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 00:10:37.913 "is_configured": true, 00:10:37.913 "data_offset": 0, 00:10:37.913 "data_size": 65536 00:10:37.913 } 00:10:37.913 ] 00:10:37.913 }' 00:10:37.913 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.913 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.173 [2024-11-28 18:51:07.659479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.173 BaseBdev1 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.173 [ 00:10:38.173 { 00:10:38.173 "name": "BaseBdev1", 00:10:38.173 "aliases": [ 00:10:38.173 
"2d1b06a5-4683-45cb-88e1-c3a5dd32d25d" 00:10:38.173 ], 00:10:38.173 "product_name": "Malloc disk", 00:10:38.173 "block_size": 512, 00:10:38.173 "num_blocks": 65536, 00:10:38.173 "uuid": "2d1b06a5-4683-45cb-88e1-c3a5dd32d25d", 00:10:38.173 "assigned_rate_limits": { 00:10:38.173 "rw_ios_per_sec": 0, 00:10:38.173 "rw_mbytes_per_sec": 0, 00:10:38.173 "r_mbytes_per_sec": 0, 00:10:38.173 "w_mbytes_per_sec": 0 00:10:38.173 }, 00:10:38.173 "claimed": true, 00:10:38.173 "claim_type": "exclusive_write", 00:10:38.173 "zoned": false, 00:10:38.173 "supported_io_types": { 00:10:38.173 "read": true, 00:10:38.173 "write": true, 00:10:38.173 "unmap": true, 00:10:38.173 "flush": true, 00:10:38.173 "reset": true, 00:10:38.173 "nvme_admin": false, 00:10:38.173 "nvme_io": false, 00:10:38.173 "nvme_io_md": false, 00:10:38.173 "write_zeroes": true, 00:10:38.173 "zcopy": true, 00:10:38.173 "get_zone_info": false, 00:10:38.173 "zone_management": false, 00:10:38.173 "zone_append": false, 00:10:38.173 "compare": false, 00:10:38.173 "compare_and_write": false, 00:10:38.173 "abort": true, 00:10:38.173 "seek_hole": false, 00:10:38.173 "seek_data": false, 00:10:38.173 "copy": true, 00:10:38.173 "nvme_iov_md": false 00:10:38.173 }, 00:10:38.173 "memory_domains": [ 00:10:38.173 { 00:10:38.173 "dma_device_id": "system", 00:10:38.173 "dma_device_type": 1 00:10:38.173 }, 00:10:38.173 { 00:10:38.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.173 "dma_device_type": 2 00:10:38.173 } 00:10:38.173 ], 00:10:38.173 "driver_specific": {} 00:10:38.173 } 00:10:38.173 ] 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.173 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.174 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.174 "name": "Existed_Raid", 00:10:38.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.174 "strip_size_kb": 64, 00:10:38.174 "state": "configuring", 00:10:38.174 "raid_level": "concat", 00:10:38.174 "superblock": false, 00:10:38.174 "num_base_bdevs": 4, 00:10:38.174 "num_base_bdevs_discovered": 3, 00:10:38.174 "num_base_bdevs_operational": 4, 00:10:38.174 
"base_bdevs_list": [ 00:10:38.174 { 00:10:38.174 "name": "BaseBdev1", 00:10:38.174 "uuid": "2d1b06a5-4683-45cb-88e1-c3a5dd32d25d", 00:10:38.174 "is_configured": true, 00:10:38.174 "data_offset": 0, 00:10:38.174 "data_size": 65536 00:10:38.174 }, 00:10:38.174 { 00:10:38.174 "name": null, 00:10:38.174 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:38.174 "is_configured": false, 00:10:38.174 "data_offset": 0, 00:10:38.174 "data_size": 65536 00:10:38.174 }, 00:10:38.174 { 00:10:38.174 "name": "BaseBdev3", 00:10:38.174 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:38.174 "is_configured": true, 00:10:38.174 "data_offset": 0, 00:10:38.174 "data_size": 65536 00:10:38.174 }, 00:10:38.174 { 00:10:38.174 "name": "BaseBdev4", 00:10:38.174 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 00:10:38.174 "is_configured": true, 00:10:38.174 "data_offset": 0, 00:10:38.174 "data_size": 65536 00:10:38.174 } 00:10:38.174 ] 00:10:38.174 }' 00:10:38.174 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.174 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:38.743 18:51:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.743 [2024-11-28 18:51:08.131712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.743 "name": "Existed_Raid", 00:10:38.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.743 "strip_size_kb": 64, 00:10:38.743 "state": "configuring", 00:10:38.743 "raid_level": "concat", 00:10:38.743 "superblock": false, 00:10:38.743 "num_base_bdevs": 4, 00:10:38.743 "num_base_bdevs_discovered": 2, 00:10:38.743 "num_base_bdevs_operational": 4, 00:10:38.743 "base_bdevs_list": [ 00:10:38.743 { 00:10:38.743 "name": "BaseBdev1", 00:10:38.743 "uuid": "2d1b06a5-4683-45cb-88e1-c3a5dd32d25d", 00:10:38.743 "is_configured": true, 00:10:38.743 "data_offset": 0, 00:10:38.743 "data_size": 65536 00:10:38.743 }, 00:10:38.743 { 00:10:38.743 "name": null, 00:10:38.743 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:38.743 "is_configured": false, 00:10:38.743 "data_offset": 0, 00:10:38.743 "data_size": 65536 00:10:38.743 }, 00:10:38.743 { 00:10:38.743 "name": null, 00:10:38.743 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:38.743 "is_configured": false, 00:10:38.743 "data_offset": 0, 00:10:38.743 "data_size": 65536 00:10:38.743 }, 00:10:38.743 { 00:10:38.743 "name": "BaseBdev4", 00:10:38.743 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 00:10:38.743 "is_configured": true, 00:10:38.743 "data_offset": 0, 00:10:38.743 "data_size": 65536 00:10:38.743 } 00:10:38.743 ] 00:10:38.743 }' 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.743 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.003 [2024-11-28 18:51:08.563852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.003 "name": "Existed_Raid", 00:10:39.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.003 "strip_size_kb": 64, 00:10:39.003 "state": "configuring", 00:10:39.003 "raid_level": "concat", 00:10:39.003 "superblock": false, 00:10:39.003 "num_base_bdevs": 4, 00:10:39.003 "num_base_bdevs_discovered": 3, 00:10:39.003 "num_base_bdevs_operational": 4, 00:10:39.003 "base_bdevs_list": [ 00:10:39.003 { 00:10:39.003 "name": "BaseBdev1", 00:10:39.003 "uuid": "2d1b06a5-4683-45cb-88e1-c3a5dd32d25d", 00:10:39.003 "is_configured": true, 00:10:39.003 "data_offset": 0, 00:10:39.003 "data_size": 65536 00:10:39.003 }, 00:10:39.003 { 00:10:39.003 "name": null, 00:10:39.003 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:39.003 "is_configured": false, 00:10:39.003 "data_offset": 0, 00:10:39.003 "data_size": 65536 00:10:39.003 }, 00:10:39.003 { 00:10:39.003 "name": "BaseBdev3", 00:10:39.003 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:39.003 "is_configured": true, 00:10:39.003 "data_offset": 0, 00:10:39.003 "data_size": 65536 00:10:39.003 }, 00:10:39.003 { 
00:10:39.003 "name": "BaseBdev4", 00:10:39.003 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 00:10:39.003 "is_configured": true, 00:10:39.003 "data_offset": 0, 00:10:39.003 "data_size": 65536 00:10:39.003 } 00:10:39.003 ] 00:10:39.003 }' 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.003 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.573 [2024-11-28 18:51:08.931971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.573 "name": "Existed_Raid", 00:10:39.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.573 "strip_size_kb": 64, 00:10:39.573 "state": "configuring", 00:10:39.573 "raid_level": "concat", 00:10:39.573 "superblock": false, 00:10:39.573 "num_base_bdevs": 4, 00:10:39.573 "num_base_bdevs_discovered": 2, 00:10:39.573 "num_base_bdevs_operational": 4, 00:10:39.573 "base_bdevs_list": [ 00:10:39.573 { 00:10:39.573 "name": null, 00:10:39.573 "uuid": 
"2d1b06a5-4683-45cb-88e1-c3a5dd32d25d", 00:10:39.573 "is_configured": false, 00:10:39.573 "data_offset": 0, 00:10:39.573 "data_size": 65536 00:10:39.573 }, 00:10:39.573 { 00:10:39.573 "name": null, 00:10:39.573 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:39.573 "is_configured": false, 00:10:39.573 "data_offset": 0, 00:10:39.573 "data_size": 65536 00:10:39.573 }, 00:10:39.573 { 00:10:39.573 "name": "BaseBdev3", 00:10:39.573 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:39.573 "is_configured": true, 00:10:39.573 "data_offset": 0, 00:10:39.573 "data_size": 65536 00:10:39.573 }, 00:10:39.573 { 00:10:39.573 "name": "BaseBdev4", 00:10:39.573 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 00:10:39.573 "is_configured": true, 00:10:39.573 "data_offset": 0, 00:10:39.573 "data_size": 65536 00:10:39.573 } 00:10:39.573 ] 00:10:39.573 }' 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.573 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.833 [2024-11-28 18:51:09.410463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.833 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.833 18:51:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.093 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.093 "name": "Existed_Raid", 00:10:40.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.093 "strip_size_kb": 64, 00:10:40.093 "state": "configuring", 00:10:40.093 "raid_level": "concat", 00:10:40.093 "superblock": false, 00:10:40.093 "num_base_bdevs": 4, 00:10:40.093 "num_base_bdevs_discovered": 3, 00:10:40.093 "num_base_bdevs_operational": 4, 00:10:40.093 "base_bdevs_list": [ 00:10:40.093 { 00:10:40.093 "name": null, 00:10:40.093 "uuid": "2d1b06a5-4683-45cb-88e1-c3a5dd32d25d", 00:10:40.093 "is_configured": false, 00:10:40.093 "data_offset": 0, 00:10:40.093 "data_size": 65536 00:10:40.093 }, 00:10:40.093 { 00:10:40.093 "name": "BaseBdev2", 00:10:40.093 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:40.093 "is_configured": true, 00:10:40.093 "data_offset": 0, 00:10:40.093 "data_size": 65536 00:10:40.093 }, 00:10:40.093 { 00:10:40.093 "name": "BaseBdev3", 00:10:40.093 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:40.093 "is_configured": true, 00:10:40.093 "data_offset": 0, 00:10:40.093 "data_size": 65536 00:10:40.093 }, 00:10:40.093 { 00:10:40.093 "name": "BaseBdev4", 00:10:40.093 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 00:10:40.093 "is_configured": true, 00:10:40.093 "data_offset": 0, 00:10:40.093 "data_size": 65536 00:10:40.093 } 00:10:40.093 ] 00:10:40.093 }' 00:10:40.093 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.093 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.353 18:51:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2d1b06a5-4683-45cb-88e1-c3a5dd32d25d 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.353 NewBaseBdev 00:10:40.353 [2024-11-28 18:51:09.865538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:40.353 [2024-11-28 18:51:09.865580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.353 [2024-11-28 18:51:09.865590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:40.353 [2024-11-28 18:51:09.865824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:40.353 [2024-11-28 18:51:09.865936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x617000007e80 00:10:40.353 [2024-11-28 18:51:09.865944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:40.353 [2024-11-28 18:51:09.866104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.353 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.353 [ 00:10:40.353 { 00:10:40.353 "name": "NewBaseBdev", 00:10:40.353 "aliases": [ 00:10:40.353 
"2d1b06a5-4683-45cb-88e1-c3a5dd32d25d" 00:10:40.353 ], 00:10:40.353 "product_name": "Malloc disk", 00:10:40.353 "block_size": 512, 00:10:40.353 "num_blocks": 65536, 00:10:40.353 "uuid": "2d1b06a5-4683-45cb-88e1-c3a5dd32d25d", 00:10:40.353 "assigned_rate_limits": { 00:10:40.353 "rw_ios_per_sec": 0, 00:10:40.353 "rw_mbytes_per_sec": 0, 00:10:40.353 "r_mbytes_per_sec": 0, 00:10:40.353 "w_mbytes_per_sec": 0 00:10:40.353 }, 00:10:40.353 "claimed": true, 00:10:40.353 "claim_type": "exclusive_write", 00:10:40.353 "zoned": false, 00:10:40.353 "supported_io_types": { 00:10:40.353 "read": true, 00:10:40.353 "write": true, 00:10:40.353 "unmap": true, 00:10:40.353 "flush": true, 00:10:40.353 "reset": true, 00:10:40.353 "nvme_admin": false, 00:10:40.353 "nvme_io": false, 00:10:40.353 "nvme_io_md": false, 00:10:40.353 "write_zeroes": true, 00:10:40.354 "zcopy": true, 00:10:40.354 "get_zone_info": false, 00:10:40.354 "zone_management": false, 00:10:40.354 "zone_append": false, 00:10:40.354 "compare": false, 00:10:40.354 "compare_and_write": false, 00:10:40.354 "abort": true, 00:10:40.354 "seek_hole": false, 00:10:40.354 "seek_data": false, 00:10:40.354 "copy": true, 00:10:40.354 "nvme_iov_md": false 00:10:40.354 }, 00:10:40.354 "memory_domains": [ 00:10:40.354 { 00:10:40.354 "dma_device_id": "system", 00:10:40.354 "dma_device_type": 1 00:10:40.354 }, 00:10:40.354 { 00:10:40.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.354 "dma_device_type": 2 00:10:40.354 } 00:10:40.354 ], 00:10:40.354 "driver_specific": {} 00:10:40.354 } 00:10:40.354 ] 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.354 "name": "Existed_Raid", 00:10:40.354 "uuid": "591228ec-06e5-4a0b-a8e7-87b78abd8f4c", 00:10:40.354 "strip_size_kb": 64, 00:10:40.354 "state": "online", 00:10:40.354 "raid_level": "concat", 00:10:40.354 "superblock": false, 00:10:40.354 "num_base_bdevs": 4, 00:10:40.354 "num_base_bdevs_discovered": 4, 00:10:40.354 "num_base_bdevs_operational": 4, 00:10:40.354 "base_bdevs_list": [ 
00:10:40.354 { 00:10:40.354 "name": "NewBaseBdev", 00:10:40.354 "uuid": "2d1b06a5-4683-45cb-88e1-c3a5dd32d25d", 00:10:40.354 "is_configured": true, 00:10:40.354 "data_offset": 0, 00:10:40.354 "data_size": 65536 00:10:40.354 }, 00:10:40.354 { 00:10:40.354 "name": "BaseBdev2", 00:10:40.354 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:40.354 "is_configured": true, 00:10:40.354 "data_offset": 0, 00:10:40.354 "data_size": 65536 00:10:40.354 }, 00:10:40.354 { 00:10:40.354 "name": "BaseBdev3", 00:10:40.354 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:40.354 "is_configured": true, 00:10:40.354 "data_offset": 0, 00:10:40.354 "data_size": 65536 00:10:40.354 }, 00:10:40.354 { 00:10:40.354 "name": "BaseBdev4", 00:10:40.354 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 00:10:40.354 "is_configured": true, 00:10:40.354 "data_offset": 0, 00:10:40.354 "data_size": 65536 00:10:40.354 } 00:10:40.354 ] 00:10:40.354 }' 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.354 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.929 [2024-11-28 18:51:10.282007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.929 "name": "Existed_Raid", 00:10:40.929 "aliases": [ 00:10:40.929 "591228ec-06e5-4a0b-a8e7-87b78abd8f4c" 00:10:40.929 ], 00:10:40.929 "product_name": "Raid Volume", 00:10:40.929 "block_size": 512, 00:10:40.929 "num_blocks": 262144, 00:10:40.929 "uuid": "591228ec-06e5-4a0b-a8e7-87b78abd8f4c", 00:10:40.929 "assigned_rate_limits": { 00:10:40.929 "rw_ios_per_sec": 0, 00:10:40.929 "rw_mbytes_per_sec": 0, 00:10:40.929 "r_mbytes_per_sec": 0, 00:10:40.929 "w_mbytes_per_sec": 0 00:10:40.929 }, 00:10:40.929 "claimed": false, 00:10:40.929 "zoned": false, 00:10:40.929 "supported_io_types": { 00:10:40.929 "read": true, 00:10:40.929 "write": true, 00:10:40.929 "unmap": true, 00:10:40.929 "flush": true, 00:10:40.929 "reset": true, 00:10:40.929 "nvme_admin": false, 00:10:40.929 "nvme_io": false, 00:10:40.929 "nvme_io_md": false, 00:10:40.929 "write_zeroes": true, 00:10:40.929 "zcopy": false, 00:10:40.929 "get_zone_info": false, 00:10:40.929 "zone_management": false, 00:10:40.929 "zone_append": false, 00:10:40.929 "compare": false, 00:10:40.929 "compare_and_write": false, 00:10:40.929 "abort": false, 00:10:40.929 "seek_hole": false, 00:10:40.929 "seek_data": false, 00:10:40.929 "copy": false, 00:10:40.929 "nvme_iov_md": false 00:10:40.929 }, 00:10:40.929 "memory_domains": [ 00:10:40.929 { 00:10:40.929 "dma_device_id": "system", 00:10:40.929 "dma_device_type": 1 00:10:40.929 }, 00:10:40.929 { 
00:10:40.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.929 "dma_device_type": 2 00:10:40.929 }, 00:10:40.929 { 00:10:40.929 "dma_device_id": "system", 00:10:40.929 "dma_device_type": 1 00:10:40.929 }, 00:10:40.929 { 00:10:40.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.929 "dma_device_type": 2 00:10:40.929 }, 00:10:40.929 { 00:10:40.929 "dma_device_id": "system", 00:10:40.929 "dma_device_type": 1 00:10:40.929 }, 00:10:40.929 { 00:10:40.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.929 "dma_device_type": 2 00:10:40.929 }, 00:10:40.929 { 00:10:40.929 "dma_device_id": "system", 00:10:40.929 "dma_device_type": 1 00:10:40.929 }, 00:10:40.929 { 00:10:40.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.929 "dma_device_type": 2 00:10:40.929 } 00:10:40.929 ], 00:10:40.929 "driver_specific": { 00:10:40.929 "raid": { 00:10:40.929 "uuid": "591228ec-06e5-4a0b-a8e7-87b78abd8f4c", 00:10:40.929 "strip_size_kb": 64, 00:10:40.929 "state": "online", 00:10:40.929 "raid_level": "concat", 00:10:40.929 "superblock": false, 00:10:40.929 "num_base_bdevs": 4, 00:10:40.929 "num_base_bdevs_discovered": 4, 00:10:40.929 "num_base_bdevs_operational": 4, 00:10:40.929 "base_bdevs_list": [ 00:10:40.929 { 00:10:40.929 "name": "NewBaseBdev", 00:10:40.929 "uuid": "2d1b06a5-4683-45cb-88e1-c3a5dd32d25d", 00:10:40.929 "is_configured": true, 00:10:40.929 "data_offset": 0, 00:10:40.929 "data_size": 65536 00:10:40.929 }, 00:10:40.929 { 00:10:40.929 "name": "BaseBdev2", 00:10:40.929 "uuid": "7e3f1a9d-4ee0-4f96-b946-207a48aedeef", 00:10:40.929 "is_configured": true, 00:10:40.929 "data_offset": 0, 00:10:40.929 "data_size": 65536 00:10:40.929 }, 00:10:40.929 { 00:10:40.929 "name": "BaseBdev3", 00:10:40.929 "uuid": "659aeae2-1395-4508-81b6-b79f2e49226b", 00:10:40.929 "is_configured": true, 00:10:40.929 "data_offset": 0, 00:10:40.929 "data_size": 65536 00:10:40.929 }, 00:10:40.929 { 00:10:40.929 "name": "BaseBdev4", 00:10:40.929 "uuid": "4644289d-be4e-4417-8aab-75c947d08242", 
00:10:40.929 "is_configured": true, 00:10:40.929 "data_offset": 0, 00:10:40.929 "data_size": 65536 00:10:40.929 } 00:10:40.929 ] 00:10:40.929 } 00:10:40.929 } 00:10:40.929 }' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:40.929 BaseBdev2 00:10:40.929 BaseBdev3 00:10:40.929 BaseBdev4' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.929 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.930 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.930 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.930 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.930 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.930 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.930 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.206 18:51:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.206 [2024-11-28 18:51:10.605785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.206 [2024-11-28 18:51:10.605852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.206 [2024-11-28 18:51:10.605929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.206 [2024-11-28 18:51:10.605992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.206 [2024-11-28 18:51:10.606008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83679 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83679 ']' 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 
-- # kill -0 83679 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83679 00:10:41.206 killing process with pid 83679 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83679' 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 83679 00:10:41.206 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 83679 00:10:41.206 [2024-11-28 18:51:10.647726] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.206 [2024-11-28 18:51:10.686532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.466 ************************************ 00:10:41.466 END TEST raid_state_function_test 00:10:41.466 ************************************ 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:41.466 00:10:41.466 real 0m8.942s 00:10:41.466 user 0m15.254s 00:10:41.466 sys 0m1.757s 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.466 18:51:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:41.466 18:51:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:41.466 18:51:10 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.466 18:51:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.466 ************************************ 00:10:41.466 START TEST raid_state_function_test_sb 00:10:41.466 ************************************ 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.466 18:51:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:41.467 Process raid pid: 84328 00:10:41.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84328 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84328' 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84328 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84328 ']' 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.467 18:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:41.467 [2024-11-28 18:51:11.062650] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:41.467 [2024-11-28 18:51:11.062781] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.726 [2024-11-28 18:51:11.197883] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:41.726 [2024-11-28 18:51:11.234355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.726 [2024-11-28 18:51:11.259129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.726 [2024-11-28 18:51:11.300808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.726 [2024-11-28 18:51:11.300928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.296 [2024-11-28 18:51:11.884306] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.296 [2024-11-28 18:51:11.884405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.296 [2024-11-28 18:51:11.884421] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.296 [2024-11-28 18:51:11.884443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.296 [2024-11-28 18:51:11.884454] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.296 [2024-11-28 18:51:11.884461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.296 [2024-11-28 18:51:11.884468] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:42.296 [2024-11-28 18:51:11.884474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.296 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.556 18:51:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.556 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.556 "name": "Existed_Raid", 00:10:42.556 "uuid": "35dab177-e519-4fc8-a053-7753b73934dd", 00:10:42.556 "strip_size_kb": 64, 00:10:42.556 "state": "configuring", 00:10:42.556 "raid_level": "concat", 00:10:42.556 "superblock": true, 00:10:42.556 "num_base_bdevs": 4, 00:10:42.556 "num_base_bdevs_discovered": 0, 00:10:42.556 "num_base_bdevs_operational": 4, 00:10:42.556 "base_bdevs_list": [ 00:10:42.556 { 00:10:42.556 "name": "BaseBdev1", 00:10:42.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.556 "is_configured": false, 00:10:42.556 "data_offset": 0, 00:10:42.556 "data_size": 0 00:10:42.556 }, 00:10:42.556 { 00:10:42.556 "name": "BaseBdev2", 00:10:42.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.556 "is_configured": false, 00:10:42.556 "data_offset": 0, 00:10:42.556 "data_size": 0 00:10:42.556 }, 00:10:42.556 { 00:10:42.556 "name": "BaseBdev3", 00:10:42.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.556 "is_configured": false, 00:10:42.556 "data_offset": 0, 00:10:42.556 "data_size": 0 00:10:42.556 }, 00:10:42.556 { 00:10:42.556 "name": "BaseBdev4", 00:10:42.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.556 "is_configured": false, 00:10:42.556 "data_offset": 0, 00:10:42.556 "data_size": 0 00:10:42.556 } 00:10:42.556 ] 00:10:42.556 }' 00:10:42.556 18:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.556 18:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.816 [2024-11-28 18:51:12.248306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.816 [2024-11-28 18:51:12.248402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.816 [2024-11-28 18:51:12.256357] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.816 [2024-11-28 18:51:12.256397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.816 [2024-11-28 18:51:12.256407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.816 [2024-11-28 18:51:12.256414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.816 [2024-11-28 18:51:12.256422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.816 [2024-11-28 18:51:12.256438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.816 [2024-11-28 18:51:12.256445] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:42.816 [2024-11-28 18:51:12.256452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.816 [2024-11-28 18:51:12.273118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.816 BaseBdev1 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:42.816 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.816 [ 00:10:42.816 { 00:10:42.816 "name": "BaseBdev1", 00:10:42.816 "aliases": [ 00:10:42.816 "ca28f8ee-d8a8-4a84-aa41-10bf7fa0cebe" 00:10:42.816 ], 00:10:42.816 "product_name": "Malloc disk", 00:10:42.816 "block_size": 512, 00:10:42.816 "num_blocks": 65536, 00:10:42.816 "uuid": "ca28f8ee-d8a8-4a84-aa41-10bf7fa0cebe", 00:10:42.816 "assigned_rate_limits": { 00:10:42.816 "rw_ios_per_sec": 0, 00:10:42.816 "rw_mbytes_per_sec": 0, 00:10:42.816 "r_mbytes_per_sec": 0, 00:10:42.816 "w_mbytes_per_sec": 0 00:10:42.816 }, 00:10:42.816 "claimed": true, 00:10:42.816 "claim_type": "exclusive_write", 00:10:42.817 "zoned": false, 00:10:42.817 "supported_io_types": { 00:10:42.817 "read": true, 00:10:42.817 "write": true, 00:10:42.817 "unmap": true, 00:10:42.817 "flush": true, 00:10:42.817 "reset": true, 00:10:42.817 "nvme_admin": false, 00:10:42.817 "nvme_io": false, 00:10:42.817 "nvme_io_md": false, 00:10:42.817 "write_zeroes": true, 00:10:42.817 "zcopy": true, 00:10:42.817 "get_zone_info": false, 00:10:42.817 "zone_management": false, 00:10:42.817 "zone_append": false, 00:10:42.817 "compare": false, 00:10:42.817 "compare_and_write": false, 00:10:42.817 "abort": true, 00:10:42.817 "seek_hole": false, 00:10:42.817 "seek_data": false, 00:10:42.817 "copy": true, 00:10:42.817 "nvme_iov_md": false 00:10:42.817 }, 00:10:42.817 "memory_domains": [ 00:10:42.817 { 00:10:42.817 "dma_device_id": "system", 00:10:42.817 "dma_device_type": 1 00:10:42.817 }, 00:10:42.817 { 00:10:42.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.817 "dma_device_type": 2 00:10:42.817 } 00:10:42.817 ], 00:10:42.817 "driver_specific": {} 00:10:42.817 } 00:10:42.817 ] 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.817 "name": "Existed_Raid", 00:10:42.817 "uuid": "cc22fd59-de30-405d-94fa-2f44c12fe629", 
00:10:42.817 "strip_size_kb": 64, 00:10:42.817 "state": "configuring", 00:10:42.817 "raid_level": "concat", 00:10:42.817 "superblock": true, 00:10:42.817 "num_base_bdevs": 4, 00:10:42.817 "num_base_bdevs_discovered": 1, 00:10:42.817 "num_base_bdevs_operational": 4, 00:10:42.817 "base_bdevs_list": [ 00:10:42.817 { 00:10:42.817 "name": "BaseBdev1", 00:10:42.817 "uuid": "ca28f8ee-d8a8-4a84-aa41-10bf7fa0cebe", 00:10:42.817 "is_configured": true, 00:10:42.817 "data_offset": 2048, 00:10:42.817 "data_size": 63488 00:10:42.817 }, 00:10:42.817 { 00:10:42.817 "name": "BaseBdev2", 00:10:42.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.817 "is_configured": false, 00:10:42.817 "data_offset": 0, 00:10:42.817 "data_size": 0 00:10:42.817 }, 00:10:42.817 { 00:10:42.817 "name": "BaseBdev3", 00:10:42.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.817 "is_configured": false, 00:10:42.817 "data_offset": 0, 00:10:42.817 "data_size": 0 00:10:42.817 }, 00:10:42.817 { 00:10:42.817 "name": "BaseBdev4", 00:10:42.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.817 "is_configured": false, 00:10:42.817 "data_offset": 0, 00:10:42.817 "data_size": 0 00:10:42.817 } 00:10:42.817 ] 00:10:42.817 }' 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.817 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.415 [2024-11-28 18:51:12.709260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.415 [2024-11-28 18:51:12.709352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.415 [2024-11-28 18:51:12.717315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.415 [2024-11-28 18:51:12.719134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.415 [2024-11-28 18:51:12.719204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.415 [2024-11-28 18:51:12.719231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.415 [2024-11-28 18:51:12.719251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.415 [2024-11-28 18:51:12.719270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:43.415 [2024-11-28 18:51:12.719304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.415 "name": "Existed_Raid", 00:10:43.415 "uuid": "eda3e855-316b-4a88-8c35-6be644e39fe9", 00:10:43.415 "strip_size_kb": 64, 00:10:43.415 "state": "configuring", 00:10:43.415 "raid_level": "concat", 00:10:43.415 "superblock": true, 00:10:43.415 
"num_base_bdevs": 4, 00:10:43.415 "num_base_bdevs_discovered": 1, 00:10:43.415 "num_base_bdevs_operational": 4, 00:10:43.415 "base_bdevs_list": [ 00:10:43.415 { 00:10:43.415 "name": "BaseBdev1", 00:10:43.415 "uuid": "ca28f8ee-d8a8-4a84-aa41-10bf7fa0cebe", 00:10:43.415 "is_configured": true, 00:10:43.415 "data_offset": 2048, 00:10:43.415 "data_size": 63488 00:10:43.415 }, 00:10:43.415 { 00:10:43.415 "name": "BaseBdev2", 00:10:43.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.415 "is_configured": false, 00:10:43.415 "data_offset": 0, 00:10:43.415 "data_size": 0 00:10:43.415 }, 00:10:43.415 { 00:10:43.415 "name": "BaseBdev3", 00:10:43.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.415 "is_configured": false, 00:10:43.415 "data_offset": 0, 00:10:43.415 "data_size": 0 00:10:43.415 }, 00:10:43.415 { 00:10:43.415 "name": "BaseBdev4", 00:10:43.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.415 "is_configured": false, 00:10:43.415 "data_offset": 0, 00:10:43.415 "data_size": 0 00:10:43.415 } 00:10:43.415 ] 00:10:43.415 }' 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.415 18:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.675 [2024-11-28 18:51:13.092542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.675 BaseBdev2 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev 
BaseBdev2 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.675 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.675 [ 00:10:43.675 { 00:10:43.675 "name": "BaseBdev2", 00:10:43.675 "aliases": [ 00:10:43.675 "84fdf60f-bb9a-4912-b439-922a015c920f" 00:10:43.675 ], 00:10:43.675 "product_name": "Malloc disk", 00:10:43.675 "block_size": 512, 00:10:43.675 "num_blocks": 65536, 00:10:43.675 "uuid": "84fdf60f-bb9a-4912-b439-922a015c920f", 00:10:43.675 "assigned_rate_limits": { 00:10:43.675 "rw_ios_per_sec": 0, 00:10:43.675 "rw_mbytes_per_sec": 0, 00:10:43.675 "r_mbytes_per_sec": 0, 00:10:43.675 "w_mbytes_per_sec": 0 00:10:43.675 }, 00:10:43.675 "claimed": true, 00:10:43.675 "claim_type": 
"exclusive_write", 00:10:43.676 "zoned": false, 00:10:43.676 "supported_io_types": { 00:10:43.676 "read": true, 00:10:43.676 "write": true, 00:10:43.676 "unmap": true, 00:10:43.676 "flush": true, 00:10:43.676 "reset": true, 00:10:43.676 "nvme_admin": false, 00:10:43.676 "nvme_io": false, 00:10:43.676 "nvme_io_md": false, 00:10:43.676 "write_zeroes": true, 00:10:43.676 "zcopy": true, 00:10:43.676 "get_zone_info": false, 00:10:43.676 "zone_management": false, 00:10:43.676 "zone_append": false, 00:10:43.676 "compare": false, 00:10:43.676 "compare_and_write": false, 00:10:43.676 "abort": true, 00:10:43.676 "seek_hole": false, 00:10:43.676 "seek_data": false, 00:10:43.676 "copy": true, 00:10:43.676 "nvme_iov_md": false 00:10:43.676 }, 00:10:43.676 "memory_domains": [ 00:10:43.676 { 00:10:43.676 "dma_device_id": "system", 00:10:43.676 "dma_device_type": 1 00:10:43.676 }, 00:10:43.676 { 00:10:43.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.676 "dma_device_type": 2 00:10:43.676 } 00:10:43.676 ], 00:10:43.676 "driver_specific": {} 00:10:43.676 } 00:10:43.676 ] 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.676 "name": "Existed_Raid", 00:10:43.676 "uuid": "eda3e855-316b-4a88-8c35-6be644e39fe9", 00:10:43.676 "strip_size_kb": 64, 00:10:43.676 "state": "configuring", 00:10:43.676 "raid_level": "concat", 00:10:43.676 "superblock": true, 00:10:43.676 "num_base_bdevs": 4, 00:10:43.676 "num_base_bdevs_discovered": 2, 00:10:43.676 "num_base_bdevs_operational": 4, 00:10:43.676 "base_bdevs_list": [ 00:10:43.676 { 00:10:43.676 "name": "BaseBdev1", 00:10:43.676 "uuid": "ca28f8ee-d8a8-4a84-aa41-10bf7fa0cebe", 00:10:43.676 "is_configured": true, 00:10:43.676 "data_offset": 2048, 00:10:43.676 
"data_size": 63488 00:10:43.676 }, 00:10:43.676 { 00:10:43.676 "name": "BaseBdev2", 00:10:43.676 "uuid": "84fdf60f-bb9a-4912-b439-922a015c920f", 00:10:43.676 "is_configured": true, 00:10:43.676 "data_offset": 2048, 00:10:43.676 "data_size": 63488 00:10:43.676 }, 00:10:43.676 { 00:10:43.676 "name": "BaseBdev3", 00:10:43.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.676 "is_configured": false, 00:10:43.676 "data_offset": 0, 00:10:43.676 "data_size": 0 00:10:43.676 }, 00:10:43.676 { 00:10:43.676 "name": "BaseBdev4", 00:10:43.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.676 "is_configured": false, 00:10:43.676 "data_offset": 0, 00:10:43.676 "data_size": 0 00:10:43.676 } 00:10:43.676 ] 00:10:43.676 }' 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.676 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.244 [2024-11-28 18:51:13.564372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.244 BaseBdev3 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.244 [ 00:10:44.244 { 00:10:44.244 "name": "BaseBdev3", 00:10:44.244 "aliases": [ 00:10:44.244 "5b39ed82-6faa-4f3a-88cc-27faf0f3a475" 00:10:44.244 ], 00:10:44.244 "product_name": "Malloc disk", 00:10:44.244 "block_size": 512, 00:10:44.244 "num_blocks": 65536, 00:10:44.244 "uuid": "5b39ed82-6faa-4f3a-88cc-27faf0f3a475", 00:10:44.244 "assigned_rate_limits": { 00:10:44.244 "rw_ios_per_sec": 0, 00:10:44.244 "rw_mbytes_per_sec": 0, 00:10:44.244 "r_mbytes_per_sec": 0, 00:10:44.244 "w_mbytes_per_sec": 0 00:10:44.244 }, 00:10:44.244 "claimed": true, 00:10:44.244 "claim_type": "exclusive_write", 00:10:44.244 "zoned": false, 00:10:44.244 "supported_io_types": { 00:10:44.244 "read": true, 00:10:44.244 "write": true, 00:10:44.244 "unmap": true, 00:10:44.244 "flush": true, 00:10:44.244 "reset": true, 00:10:44.244 "nvme_admin": false, 00:10:44.244 "nvme_io": false, 00:10:44.244 "nvme_io_md": false, 
00:10:44.244 "write_zeroes": true, 00:10:44.244 "zcopy": true, 00:10:44.244 "get_zone_info": false, 00:10:44.244 "zone_management": false, 00:10:44.244 "zone_append": false, 00:10:44.244 "compare": false, 00:10:44.244 "compare_and_write": false, 00:10:44.244 "abort": true, 00:10:44.244 "seek_hole": false, 00:10:44.244 "seek_data": false, 00:10:44.244 "copy": true, 00:10:44.244 "nvme_iov_md": false 00:10:44.244 }, 00:10:44.244 "memory_domains": [ 00:10:44.244 { 00:10:44.244 "dma_device_id": "system", 00:10:44.244 "dma_device_type": 1 00:10:44.244 }, 00:10:44.244 { 00:10:44.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.244 "dma_device_type": 2 00:10:44.244 } 00:10:44.244 ], 00:10:44.244 "driver_specific": {} 00:10:44.244 } 00:10:44.244 ] 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.244 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.245 "name": "Existed_Raid", 00:10:44.245 "uuid": "eda3e855-316b-4a88-8c35-6be644e39fe9", 00:10:44.245 "strip_size_kb": 64, 00:10:44.245 "state": "configuring", 00:10:44.245 "raid_level": "concat", 00:10:44.245 "superblock": true, 00:10:44.245 "num_base_bdevs": 4, 00:10:44.245 "num_base_bdevs_discovered": 3, 00:10:44.245 "num_base_bdevs_operational": 4, 00:10:44.245 "base_bdevs_list": [ 00:10:44.245 { 00:10:44.245 "name": "BaseBdev1", 00:10:44.245 "uuid": "ca28f8ee-d8a8-4a84-aa41-10bf7fa0cebe", 00:10:44.245 "is_configured": true, 00:10:44.245 "data_offset": 2048, 00:10:44.245 "data_size": 63488 00:10:44.245 }, 00:10:44.245 { 00:10:44.245 "name": "BaseBdev2", 00:10:44.245 "uuid": "84fdf60f-bb9a-4912-b439-922a015c920f", 00:10:44.245 "is_configured": true, 00:10:44.245 "data_offset": 2048, 00:10:44.245 "data_size": 63488 00:10:44.245 }, 00:10:44.245 { 00:10:44.245 "name": "BaseBdev3", 00:10:44.245 "uuid": 
"5b39ed82-6faa-4f3a-88cc-27faf0f3a475", 00:10:44.245 "is_configured": true, 00:10:44.245 "data_offset": 2048, 00:10:44.245 "data_size": 63488 00:10:44.245 }, 00:10:44.245 { 00:10:44.245 "name": "BaseBdev4", 00:10:44.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.245 "is_configured": false, 00:10:44.245 "data_offset": 0, 00:10:44.245 "data_size": 0 00:10:44.245 } 00:10:44.245 ] 00:10:44.245 }' 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.245 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.505 18:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:44.505 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.505 18:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.505 [2024-11-28 18:51:14.003467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.505 [2024-11-28 18:51:14.003758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:44.505 [2024-11-28 18:51:14.003819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.505 BaseBdev4 00:10:44.505 [2024-11-28 18:51:14.004131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:44.505 [2024-11-28 18:51:14.004318] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:44.505 [2024-11-28 18:51:14.004331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007b00 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:44.505 [2024-11-28 18:51:14.004467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.505 [ 00:10:44.505 { 00:10:44.505 "name": "BaseBdev4", 00:10:44.505 "aliases": [ 00:10:44.505 "378f3322-e82a-48d5-89bb-37498713ba2c" 00:10:44.505 ], 00:10:44.505 "product_name": "Malloc disk", 00:10:44.505 "block_size": 512, 00:10:44.505 "num_blocks": 65536, 00:10:44.505 "uuid": "378f3322-e82a-48d5-89bb-37498713ba2c", 00:10:44.505 "assigned_rate_limits": { 00:10:44.505 "rw_ios_per_sec": 0, 00:10:44.505 "rw_mbytes_per_sec": 0, 00:10:44.505 "r_mbytes_per_sec": 0, 
00:10:44.505 "w_mbytes_per_sec": 0 00:10:44.505 }, 00:10:44.505 "claimed": true, 00:10:44.505 "claim_type": "exclusive_write", 00:10:44.505 "zoned": false, 00:10:44.505 "supported_io_types": { 00:10:44.505 "read": true, 00:10:44.505 "write": true, 00:10:44.505 "unmap": true, 00:10:44.505 "flush": true, 00:10:44.505 "reset": true, 00:10:44.505 "nvme_admin": false, 00:10:44.505 "nvme_io": false, 00:10:44.505 "nvme_io_md": false, 00:10:44.505 "write_zeroes": true, 00:10:44.505 "zcopy": true, 00:10:44.505 "get_zone_info": false, 00:10:44.505 "zone_management": false, 00:10:44.505 "zone_append": false, 00:10:44.505 "compare": false, 00:10:44.505 "compare_and_write": false, 00:10:44.505 "abort": true, 00:10:44.505 "seek_hole": false, 00:10:44.505 "seek_data": false, 00:10:44.505 "copy": true, 00:10:44.505 "nvme_iov_md": false 00:10:44.505 }, 00:10:44.505 "memory_domains": [ 00:10:44.505 { 00:10:44.505 "dma_device_id": "system", 00:10:44.505 "dma_device_type": 1 00:10:44.505 }, 00:10:44.505 { 00:10:44.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.505 "dma_device_type": 2 00:10:44.505 } 00:10:44.505 ], 00:10:44.505 "driver_specific": {} 00:10:44.505 } 00:10:44.505 ] 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.505 "name": "Existed_Raid", 00:10:44.505 "uuid": "eda3e855-316b-4a88-8c35-6be644e39fe9", 00:10:44.505 "strip_size_kb": 64, 00:10:44.505 "state": "online", 00:10:44.505 "raid_level": "concat", 00:10:44.505 "superblock": true, 00:10:44.505 "num_base_bdevs": 4, 00:10:44.505 "num_base_bdevs_discovered": 4, 00:10:44.505 "num_base_bdevs_operational": 4, 00:10:44.505 "base_bdevs_list": [ 00:10:44.505 { 00:10:44.505 "name": "BaseBdev1", 00:10:44.505 "uuid": 
"ca28f8ee-d8a8-4a84-aa41-10bf7fa0cebe", 00:10:44.505 "is_configured": true, 00:10:44.505 "data_offset": 2048, 00:10:44.505 "data_size": 63488 00:10:44.505 }, 00:10:44.505 { 00:10:44.505 "name": "BaseBdev2", 00:10:44.505 "uuid": "84fdf60f-bb9a-4912-b439-922a015c920f", 00:10:44.505 "is_configured": true, 00:10:44.505 "data_offset": 2048, 00:10:44.505 "data_size": 63488 00:10:44.505 }, 00:10:44.505 { 00:10:44.505 "name": "BaseBdev3", 00:10:44.505 "uuid": "5b39ed82-6faa-4f3a-88cc-27faf0f3a475", 00:10:44.505 "is_configured": true, 00:10:44.505 "data_offset": 2048, 00:10:44.505 "data_size": 63488 00:10:44.505 }, 00:10:44.505 { 00:10:44.505 "name": "BaseBdev4", 00:10:44.505 "uuid": "378f3322-e82a-48d5-89bb-37498713ba2c", 00:10:44.505 "is_configured": true, 00:10:44.505 "data_offset": 2048, 00:10:44.505 "data_size": 63488 00:10:44.505 } 00:10:44.505 ] 00:10:44.505 }' 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.505 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.075 [2024-11-28 18:51:14.407933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.075 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.075 "name": "Existed_Raid", 00:10:45.075 "aliases": [ 00:10:45.075 "eda3e855-316b-4a88-8c35-6be644e39fe9" 00:10:45.075 ], 00:10:45.075 "product_name": "Raid Volume", 00:10:45.075 "block_size": 512, 00:10:45.075 "num_blocks": 253952, 00:10:45.075 "uuid": "eda3e855-316b-4a88-8c35-6be644e39fe9", 00:10:45.075 "assigned_rate_limits": { 00:10:45.075 "rw_ios_per_sec": 0, 00:10:45.075 "rw_mbytes_per_sec": 0, 00:10:45.075 "r_mbytes_per_sec": 0, 00:10:45.075 "w_mbytes_per_sec": 0 00:10:45.075 }, 00:10:45.075 "claimed": false, 00:10:45.075 "zoned": false, 00:10:45.075 "supported_io_types": { 00:10:45.075 "read": true, 00:10:45.075 "write": true, 00:10:45.075 "unmap": true, 00:10:45.075 "flush": true, 00:10:45.075 "reset": true, 00:10:45.075 "nvme_admin": false, 00:10:45.075 "nvme_io": false, 00:10:45.075 "nvme_io_md": false, 00:10:45.075 "write_zeroes": true, 00:10:45.075 "zcopy": false, 00:10:45.075 "get_zone_info": false, 00:10:45.075 "zone_management": false, 00:10:45.075 "zone_append": false, 00:10:45.075 "compare": false, 00:10:45.075 "compare_and_write": false, 00:10:45.075 "abort": false, 00:10:45.075 "seek_hole": false, 00:10:45.075 "seek_data": false, 00:10:45.075 "copy": false, 00:10:45.075 "nvme_iov_md": false 00:10:45.075 }, 00:10:45.075 "memory_domains": [ 00:10:45.075 { 00:10:45.075 "dma_device_id": "system", 00:10:45.075 "dma_device_type": 1 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.075 "dma_device_type": 2 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 "dma_device_id": "system", 00:10:45.075 "dma_device_type": 1 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.075 "dma_device_type": 2 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 "dma_device_id": "system", 00:10:45.075 "dma_device_type": 1 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.075 "dma_device_type": 2 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 "dma_device_id": "system", 00:10:45.075 "dma_device_type": 1 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.075 "dma_device_type": 2 00:10:45.075 } 00:10:45.075 ], 00:10:45.075 "driver_specific": { 00:10:45.075 "raid": { 00:10:45.075 "uuid": "eda3e855-316b-4a88-8c35-6be644e39fe9", 00:10:45.075 "strip_size_kb": 64, 00:10:45.075 "state": "online", 00:10:45.075 "raid_level": "concat", 00:10:45.075 "superblock": true, 00:10:45.075 "num_base_bdevs": 4, 00:10:45.075 "num_base_bdevs_discovered": 4, 00:10:45.075 "num_base_bdevs_operational": 4, 00:10:45.075 "base_bdevs_list": [ 00:10:45.075 { 00:10:45.075 "name": "BaseBdev1", 00:10:45.075 "uuid": "ca28f8ee-d8a8-4a84-aa41-10bf7fa0cebe", 00:10:45.075 "is_configured": true, 00:10:45.075 "data_offset": 2048, 00:10:45.075 "data_size": 63488 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 "name": "BaseBdev2", 00:10:45.075 "uuid": "84fdf60f-bb9a-4912-b439-922a015c920f", 00:10:45.075 "is_configured": true, 00:10:45.075 "data_offset": 2048, 00:10:45.075 "data_size": 63488 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 "name": "BaseBdev3", 00:10:45.075 "uuid": "5b39ed82-6faa-4f3a-88cc-27faf0f3a475", 00:10:45.075 "is_configured": true, 00:10:45.075 "data_offset": 2048, 00:10:45.075 "data_size": 63488 00:10:45.075 }, 00:10:45.075 { 00:10:45.075 "name": "BaseBdev4", 00:10:45.076 "uuid": "378f3322-e82a-48d5-89bb-37498713ba2c", 
00:10:45.076 "is_configured": true, 00:10:45.076 "data_offset": 2048, 00:10:45.076 "data_size": 63488 00:10:45.076 } 00:10:45.076 ] 00:10:45.076 } 00:10:45.076 } 00:10:45.076 }' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:45.076 BaseBdev2 00:10:45.076 BaseBdev3 00:10:45.076 BaseBdev4' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:45.076 18:51:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.076 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.335 [2024-11-28 18:51:14.699747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.335 [2024-11-28 18:51:14.699772] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.335 [2024-11-28 18:51:14.699837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.335 "name": "Existed_Raid", 00:10:45.335 "uuid": 
"eda3e855-316b-4a88-8c35-6be644e39fe9", 00:10:45.335 "strip_size_kb": 64, 00:10:45.335 "state": "offline", 00:10:45.335 "raid_level": "concat", 00:10:45.335 "superblock": true, 00:10:45.335 "num_base_bdevs": 4, 00:10:45.335 "num_base_bdevs_discovered": 3, 00:10:45.335 "num_base_bdevs_operational": 3, 00:10:45.335 "base_bdevs_list": [ 00:10:45.335 { 00:10:45.335 "name": null, 00:10:45.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.335 "is_configured": false, 00:10:45.335 "data_offset": 0, 00:10:45.335 "data_size": 63488 00:10:45.335 }, 00:10:45.335 { 00:10:45.335 "name": "BaseBdev2", 00:10:45.335 "uuid": "84fdf60f-bb9a-4912-b439-922a015c920f", 00:10:45.335 "is_configured": true, 00:10:45.335 "data_offset": 2048, 00:10:45.335 "data_size": 63488 00:10:45.335 }, 00:10:45.335 { 00:10:45.335 "name": "BaseBdev3", 00:10:45.335 "uuid": "5b39ed82-6faa-4f3a-88cc-27faf0f3a475", 00:10:45.335 "is_configured": true, 00:10:45.335 "data_offset": 2048, 00:10:45.335 "data_size": 63488 00:10:45.335 }, 00:10:45.335 { 00:10:45.335 "name": "BaseBdev4", 00:10:45.335 "uuid": "378f3322-e82a-48d5-89bb-37498713ba2c", 00:10:45.335 "is_configured": true, 00:10:45.335 "data_offset": 2048, 00:10:45.335 "data_size": 63488 00:10:45.335 } 00:10:45.335 ] 00:10:45.335 }' 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.335 18:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.595 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.595 [2024-11-28 18:51:15.190963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.858 18:51:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.858 [2024-11-28 18:51:15.258106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.858 [2024-11-28 18:51:15.324983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:45.858 [2024-11-28 18:51:15.325081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.858 
18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.858 BaseBdev2 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.858 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.858 [ 00:10:45.858 { 00:10:45.858 "name": "BaseBdev2", 00:10:45.858 "aliases": [ 00:10:45.858 "27371409-54dc-475f-a549-241bf4327a61" 00:10:45.858 ], 00:10:45.858 "product_name": "Malloc disk", 00:10:45.858 "block_size": 512, 00:10:45.858 "num_blocks": 65536, 
00:10:45.858 "uuid": "27371409-54dc-475f-a549-241bf4327a61", 00:10:45.858 "assigned_rate_limits": { 00:10:45.858 "rw_ios_per_sec": 0, 00:10:45.858 "rw_mbytes_per_sec": 0, 00:10:45.858 "r_mbytes_per_sec": 0, 00:10:45.858 "w_mbytes_per_sec": 0 00:10:45.858 }, 00:10:45.858 "claimed": false, 00:10:45.858 "zoned": false, 00:10:45.858 "supported_io_types": { 00:10:45.858 "read": true, 00:10:45.858 "write": true, 00:10:45.858 "unmap": true, 00:10:45.858 "flush": true, 00:10:45.858 "reset": true, 00:10:45.858 "nvme_admin": false, 00:10:45.858 "nvme_io": false, 00:10:45.858 "nvme_io_md": false, 00:10:45.858 "write_zeroes": true, 00:10:45.858 "zcopy": true, 00:10:45.858 "get_zone_info": false, 00:10:45.858 "zone_management": false, 00:10:45.858 "zone_append": false, 00:10:45.858 "compare": false, 00:10:45.858 "compare_and_write": false, 00:10:45.858 "abort": true, 00:10:45.858 "seek_hole": false, 00:10:45.858 "seek_data": false, 00:10:45.858 "copy": true, 00:10:45.858 "nvme_iov_md": false 00:10:45.858 }, 00:10:45.858 "memory_domains": [ 00:10:45.858 { 00:10:45.858 "dma_device_id": "system", 00:10:45.858 "dma_device_type": 1 00:10:45.858 }, 00:10:45.858 { 00:10:45.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.858 "dma_device_type": 2 00:10:45.858 } 00:10:45.858 ], 00:10:45.859 "driver_specific": {} 00:10:45.859 } 00:10:45.859 ] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.859 BaseBdev3 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.859 [ 00:10:45.859 { 00:10:45.859 "name": "BaseBdev3", 00:10:45.859 "aliases": [ 00:10:45.859 "c6e40783-0ee7-44b2-8e3d-e27423e97db6" 00:10:45.859 ], 00:10:45.859 "product_name": "Malloc disk", 
00:10:45.859 "block_size": 512, 00:10:45.859 "num_blocks": 65536, 00:10:45.859 "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6", 00:10:45.859 "assigned_rate_limits": { 00:10:45.859 "rw_ios_per_sec": 0, 00:10:45.859 "rw_mbytes_per_sec": 0, 00:10:45.859 "r_mbytes_per_sec": 0, 00:10:45.859 "w_mbytes_per_sec": 0 00:10:45.859 }, 00:10:45.859 "claimed": false, 00:10:45.859 "zoned": false, 00:10:45.859 "supported_io_types": { 00:10:45.859 "read": true, 00:10:45.859 "write": true, 00:10:45.859 "unmap": true, 00:10:45.859 "flush": true, 00:10:45.859 "reset": true, 00:10:45.859 "nvme_admin": false, 00:10:45.859 "nvme_io": false, 00:10:45.859 "nvme_io_md": false, 00:10:45.859 "write_zeroes": true, 00:10:45.859 "zcopy": true, 00:10:45.859 "get_zone_info": false, 00:10:45.859 "zone_management": false, 00:10:45.859 "zone_append": false, 00:10:45.859 "compare": false, 00:10:45.859 "compare_and_write": false, 00:10:45.859 "abort": true, 00:10:45.859 "seek_hole": false, 00:10:45.859 "seek_data": false, 00:10:45.859 "copy": true, 00:10:45.859 "nvme_iov_md": false 00:10:45.859 }, 00:10:45.859 "memory_domains": [ 00:10:45.859 { 00:10:45.859 "dma_device_id": "system", 00:10:45.859 "dma_device_type": 1 00:10:45.859 }, 00:10:45.859 { 00:10:45.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.859 "dma_device_type": 2 00:10:45.859 } 00:10:45.859 ], 00:10:45.859 "driver_specific": {} 00:10:45.859 } 00:10:45.859 ] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:45.859 
18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.859 BaseBdev4 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.859 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.119 [ 00:10:46.119 { 00:10:46.119 "name": "BaseBdev4", 00:10:46.119 "aliases": [ 00:10:46.119 "cc4a1a41-94fa-4db6-9a62-2099a98c02f6" 00:10:46.119 ], 
00:10:46.119 "product_name": "Malloc disk", 00:10:46.119 "block_size": 512, 00:10:46.119 "num_blocks": 65536, 00:10:46.119 "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6", 00:10:46.119 "assigned_rate_limits": { 00:10:46.119 "rw_ios_per_sec": 0, 00:10:46.119 "rw_mbytes_per_sec": 0, 00:10:46.119 "r_mbytes_per_sec": 0, 00:10:46.119 "w_mbytes_per_sec": 0 00:10:46.119 }, 00:10:46.119 "claimed": false, 00:10:46.119 "zoned": false, 00:10:46.119 "supported_io_types": { 00:10:46.119 "read": true, 00:10:46.119 "write": true, 00:10:46.119 "unmap": true, 00:10:46.119 "flush": true, 00:10:46.119 "reset": true, 00:10:46.119 "nvme_admin": false, 00:10:46.119 "nvme_io": false, 00:10:46.119 "nvme_io_md": false, 00:10:46.119 "write_zeroes": true, 00:10:46.119 "zcopy": true, 00:10:46.119 "get_zone_info": false, 00:10:46.119 "zone_management": false, 00:10:46.119 "zone_append": false, 00:10:46.119 "compare": false, 00:10:46.119 "compare_and_write": false, 00:10:46.119 "abort": true, 00:10:46.119 "seek_hole": false, 00:10:46.119 "seek_data": false, 00:10:46.119 "copy": true, 00:10:46.119 "nvme_iov_md": false 00:10:46.119 }, 00:10:46.119 "memory_domains": [ 00:10:46.119 { 00:10:46.119 "dma_device_id": "system", 00:10:46.119 "dma_device_type": 1 00:10:46.119 }, 00:10:46.119 { 00:10:46.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.119 "dma_device_type": 2 00:10:46.119 } 00:10:46.119 ], 00:10:46.119 "driver_specific": {} 00:10:46.119 } 00:10:46.119 ] 00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.119 [2024-11-28 18:51:15.471715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:46.119 [2024-11-28 18:51:15.471759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:46.119 [2024-11-28 18:51:15.471777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:46.119 [2024-11-28 18:51:15.473617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:46.119 [2024-11-28 18:51:15.473669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.119 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.119   "name": "Existed_Raid",
00:10:46.119   "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77",
00:10:46.119   "strip_size_kb": 64,
00:10:46.119   "state": "configuring",
00:10:46.119   "raid_level": "concat",
00:10:46.119   "superblock": true,
00:10:46.119   "num_base_bdevs": 4,
00:10:46.119   "num_base_bdevs_discovered": 3,
00:10:46.119   "num_base_bdevs_operational": 4,
00:10:46.119   "base_bdevs_list": [
00:10:46.119     {
00:10:46.119       "name": "BaseBdev1",
00:10:46.119       "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.119       "is_configured": false,
00:10:46.119       "data_offset": 0,
00:10:46.119       "data_size": 0
00:10:46.119     },
00:10:46.119     {
00:10:46.119       "name": "BaseBdev2",
00:10:46.119       "uuid": "27371409-54dc-475f-a549-241bf4327a61",
00:10:46.119       "is_configured": true,
00:10:46.119       "data_offset": 2048,
00:10:46.119       "data_size": 63488
00:10:46.119     },
00:10:46.119     {
00:10:46.119       "name": "BaseBdev3",
00:10:46.120       "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6",
00:10:46.120       "is_configured": true,
00:10:46.120       "data_offset": 2048,
00:10:46.120       "data_size": 63488
00:10:46.120     },
00:10:46.120     {
00:10:46.120       "name": "BaseBdev4",
00:10:46.120       "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6",
00:10:46.120       "is_configured": true,
00:10:46.120       "data_offset": 2048,
00:10:46.120       "data_size": 63488
00:10:46.120     }
00:10:46.120   ]
00:10:46.120 }'
00:10:46.120 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.120 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.380 [2024-11-28 18:51:15.867780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.380 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.380   "name": "Existed_Raid",
00:10:46.380   "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77",
00:10:46.380   "strip_size_kb": 64,
00:10:46.380   "state": "configuring",
00:10:46.380   "raid_level": "concat",
00:10:46.380   "superblock": true,
00:10:46.380   "num_base_bdevs": 4,
00:10:46.380   "num_base_bdevs_discovered": 2,
00:10:46.380   "num_base_bdevs_operational": 4,
00:10:46.380   "base_bdevs_list": [
00:10:46.381     {
00:10:46.381       "name": "BaseBdev1",
00:10:46.381       "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.381       "is_configured": false,
00:10:46.381       "data_offset": 0,
00:10:46.381       "data_size": 0
00:10:46.381     },
00:10:46.381     {
00:10:46.381       "name": null,
00:10:46.381       "uuid": "27371409-54dc-475f-a549-241bf4327a61",
00:10:46.381       "is_configured": false,
00:10:46.381       "data_offset": 0,
00:10:46.381       "data_size": 63488
00:10:46.381     },
00:10:46.381     {
00:10:46.381       "name": "BaseBdev3",
00:10:46.381       "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6",
00:10:46.381       "is_configured": true,
00:10:46.381       "data_offset": 2048,
00:10:46.381       "data_size": 63488
00:10:46.381     },
00:10:46.381     {
00:10:46.381       "name": "BaseBdev4",
00:10:46.381       "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6",
00:10:46.381       "is_configured": true,
00:10:46.381       "data_offset": 2048,
00:10:46.381       "data_size": 63488
00:10:46.381     }
00:10:46.381   ]
00:10:46.381 }'
00:10:46.381 18:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.381 18:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.951 [2024-11-28 18:51:16.338869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:46.951 BaseBdev1
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.951 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.951 [
00:10:46.951   {
00:10:46.951     "name": "BaseBdev1",
00:10:46.951     "aliases": [
00:10:46.951       "405cb27f-d965-47d5-9690-7bec7dbd1b44"
00:10:46.951     ],
00:10:46.951     "product_name": "Malloc disk",
00:10:46.951     "block_size": 512,
00:10:46.951     "num_blocks": 65536,
00:10:46.951     "uuid": "405cb27f-d965-47d5-9690-7bec7dbd1b44",
00:10:46.951     "assigned_rate_limits": {
00:10:46.951       "rw_ios_per_sec": 0,
00:10:46.951       "rw_mbytes_per_sec": 0,
00:10:46.951       "r_mbytes_per_sec": 0,
00:10:46.951       "w_mbytes_per_sec": 0
00:10:46.951     },
00:10:46.951     "claimed": true,
00:10:46.951     "claim_type": "exclusive_write",
00:10:46.951     "zoned": false,
00:10:46.951     "supported_io_types": {
00:10:46.951       "read": true,
00:10:46.951       "write": true,
00:10:46.951       "unmap": true,
00:10:46.951       "flush": true,
00:10:46.951       "reset": true,
00:10:46.951       "nvme_admin": false,
00:10:46.952       "nvme_io": false,
00:10:46.952       "nvme_io_md": false,
00:10:46.952       "write_zeroes": true,
00:10:46.952       "zcopy": true,
00:10:46.952       "get_zone_info": false,
00:10:46.952       "zone_management": false,
00:10:46.952       "zone_append": false,
00:10:46.952       "compare": false,
00:10:46.952       "compare_and_write": false,
00:10:46.952       "abort": true,
00:10:46.952       "seek_hole": false,
00:10:46.952       "seek_data": false,
00:10:46.952       "copy": true,
00:10:46.952       "nvme_iov_md": false
00:10:46.952     },
00:10:46.952     "memory_domains": [
00:10:46.952       {
00:10:46.952         "dma_device_id": "system",
00:10:46.952         "dma_device_type": 1
00:10:46.952       },
00:10:46.952       {
00:10:46.952         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:46.952         "dma_device_type": 2
00:10:46.952       }
00:10:46.952     ],
00:10:46.952     "driver_specific": {}
00:10:46.952   }
00:10:46.952 ]
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.952   "name": "Existed_Raid",
00:10:46.952   "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77",
00:10:46.952   "strip_size_kb": 64,
00:10:46.952   "state": "configuring",
00:10:46.952   "raid_level": "concat",
00:10:46.952   "superblock": true,
00:10:46.952   "num_base_bdevs": 4,
00:10:46.952   "num_base_bdevs_discovered": 3,
00:10:46.952   "num_base_bdevs_operational": 4,
00:10:46.952   "base_bdevs_list": [
00:10:46.952     {
00:10:46.952       "name": "BaseBdev1",
00:10:46.952       "uuid": "405cb27f-d965-47d5-9690-7bec7dbd1b44",
00:10:46.952       "is_configured": true,
00:10:46.952       "data_offset": 2048,
00:10:46.952       "data_size": 63488
00:10:46.952     },
00:10:46.952     {
00:10:46.952       "name": null,
00:10:46.952       "uuid": "27371409-54dc-475f-a549-241bf4327a61",
00:10:46.952       "is_configured": false,
00:10:46.952       "data_offset": 0,
00:10:46.952       "data_size": 63488
00:10:46.952     },
00:10:46.952     {
00:10:46.952       "name": "BaseBdev3",
00:10:46.952       "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6",
00:10:46.952       "is_configured": true,
00:10:46.952       "data_offset": 2048,
00:10:46.952       "data_size": 63488
00:10:46.952     },
00:10:46.952     {
00:10:46.952       "name": "BaseBdev4",
00:10:46.952       "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6",
00:10:46.952       "is_configured": true,
00:10:46.952       "data_offset": 2048,
00:10:46.952       "data_size": 63488
00:10:46.952     }
00:10:46.952   ]
00:10:46.952 }'
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.952 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.213 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.213 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.213 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.213 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:47.213 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.473 [2024-11-28 18:51:16.835054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:47.473   "name": "Existed_Raid",
00:10:47.473   "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77",
00:10:47.473   "strip_size_kb": 64,
00:10:47.473   "state": "configuring",
00:10:47.473   "raid_level": "concat",
00:10:47.473   "superblock": true,
00:10:47.473   "num_base_bdevs": 4,
00:10:47.473   "num_base_bdevs_discovered": 2,
00:10:47.473   "num_base_bdevs_operational": 4,
00:10:47.473   "base_bdevs_list": [
00:10:47.473     {
00:10:47.473       "name": "BaseBdev1",
00:10:47.473       "uuid": "405cb27f-d965-47d5-9690-7bec7dbd1b44",
00:10:47.473       "is_configured": true,
00:10:47.473       "data_offset": 2048,
00:10:47.473       "data_size": 63488
00:10:47.473     },
00:10:47.473     {
00:10:47.473       "name": null,
00:10:47.473       "uuid": "27371409-54dc-475f-a549-241bf4327a61",
00:10:47.473       "is_configured": false,
00:10:47.473       "data_offset": 0,
00:10:47.473       "data_size": 63488
00:10:47.473     },
00:10:47.473     {
00:10:47.473       "name": null,
00:10:47.473       "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6",
00:10:47.473       "is_configured": false,
00:10:47.473       "data_offset": 0,
00:10:47.473       "data_size": 63488
00:10:47.473     },
00:10:47.473     {
00:10:47.473       "name": "BaseBdev4",
00:10:47.473       "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6",
00:10:47.473       "is_configured": true,
00:10:47.473       "data_offset": 2048,
00:10:47.473       "data_size": 63488
00:10:47.473     }
00:10:47.473   ]
00:10:47.473 }'
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:47.473 18:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.734 [2024-11-28 18:51:17.323215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.734 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:47.995 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.995 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:47.995   "name": "Existed_Raid",
00:10:47.995   "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77",
00:10:47.995   "strip_size_kb": 64,
00:10:47.995   "state": "configuring",
00:10:47.995   "raid_level": "concat",
00:10:47.995   "superblock": true,
00:10:47.995   "num_base_bdevs": 4,
00:10:47.995   "num_base_bdevs_discovered": 3,
00:10:47.995   "num_base_bdevs_operational": 4,
00:10:47.995   "base_bdevs_list": [
00:10:47.995     {
00:10:47.995       "name": "BaseBdev1",
00:10:47.995       "uuid": "405cb27f-d965-47d5-9690-7bec7dbd1b44",
00:10:47.995       "is_configured": true,
00:10:47.995       "data_offset": 2048,
00:10:47.995       "data_size": 63488
00:10:47.995     },
00:10:47.995     {
00:10:47.995       "name": null,
00:10:47.995       "uuid": "27371409-54dc-475f-a549-241bf4327a61",
00:10:47.995       "is_configured": false,
00:10:47.995       "data_offset": 0,
00:10:47.995       "data_size": 63488
00:10:47.995     },
00:10:47.995     {
00:10:47.995       "name": "BaseBdev3",
00:10:47.995       "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6",
00:10:47.995       "is_configured": true,
00:10:47.995       "data_offset": 2048,
00:10:47.995       "data_size": 63488
00:10:47.995     },
00:10:47.995     {
00:10:47.995       "name": "BaseBdev4",
00:10:47.995       "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6",
00:10:47.995       "is_configured": true,
00:10:47.995       "data_offset": 2048,
00:10:47.995       "data_size": 63488
00:10:47.995     }
00:10:47.995   ]
00:10:47.995 }'
00:10:47.995 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:47.995 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.255 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:48.255 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.255 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.255 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.256 [2024-11-28 18:51:17.815376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.256 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.516 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.516 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.516   "name": "Existed_Raid",
00:10:48.516   "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77",
00:10:48.516   "strip_size_kb": 64,
00:10:48.516   "state": "configuring",
00:10:48.516   "raid_level": "concat",
00:10:48.516   "superblock": true,
00:10:48.516   "num_base_bdevs": 4,
00:10:48.516   "num_base_bdevs_discovered": 2,
00:10:48.516   "num_base_bdevs_operational": 4,
00:10:48.516   "base_bdevs_list": [
00:10:48.516     {
00:10:48.516       "name": null,
00:10:48.516       "uuid": "405cb27f-d965-47d5-9690-7bec7dbd1b44",
00:10:48.516       "is_configured": false,
00:10:48.516       "data_offset": 0,
00:10:48.516       "data_size": 63488
00:10:48.516     },
00:10:48.516     {
00:10:48.516       "name": null,
00:10:48.516       "uuid": "27371409-54dc-475f-a549-241bf4327a61",
00:10:48.516       "is_configured": false,
00:10:48.516       "data_offset": 0,
00:10:48.516       "data_size": 63488
00:10:48.516     },
00:10:48.516     {
00:10:48.516       "name": "BaseBdev3",
00:10:48.516       "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6",
00:10:48.516       "is_configured": true,
00:10:48.516       "data_offset": 2048,
00:10:48.516       "data_size": 63488
00:10:48.516     },
00:10:48.516     {
00:10:48.516       "name": "BaseBdev4",
00:10:48.516       "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6",
00:10:48.516       "is_configured": true,
00:10:48.516       "data_offset": 2048,
00:10:48.516       "data_size": 63488
00:10:48.516     }
00:10:48.516   ]
00:10:48.516 }'
00:10:48.516 18:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.516 18:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.777 [2024-11-28 18:51:18.265858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.777   "name": "Existed_Raid",
00:10:48.777   "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77",
00:10:48.777   "strip_size_kb": 64,
00:10:48.777   "state": "configuring",
00:10:48.777   "raid_level": "concat",
00:10:48.777   "superblock": true,
00:10:48.777   "num_base_bdevs": 4,
00:10:48.777   "num_base_bdevs_discovered": 3,
00:10:48.777   "num_base_bdevs_operational": 4,
00:10:48.777   "base_bdevs_list": [
00:10:48.777     {
00:10:48.777       "name": null,
00:10:48.777       "uuid": "405cb27f-d965-47d5-9690-7bec7dbd1b44",
00:10:48.777       "is_configured": false,
00:10:48.777       "data_offset": 0,
00:10:48.777       "data_size": 63488
00:10:48.777     },
00:10:48.777     {
00:10:48.777       "name": "BaseBdev2",
00:10:48.777       "uuid": "27371409-54dc-475f-a549-241bf4327a61",
00:10:48.777       "is_configured": true,
00:10:48.777       "data_offset": 2048,
00:10:48.777       "data_size": 63488
00:10:48.777     },
00:10:48.777     {
00:10:48.777       "name": "BaseBdev3",
00:10:48.777       "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6",
00:10:48.777       "is_configured": true,
00:10:48.777       "data_offset": 2048,
00:10:48.777       "data_size": 63488
00:10:48.777     },
00:10:48.777     {
00:10:48.777       "name": "BaseBdev4",
00:10:48.777       "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6",
00:10:48.777       "is_configured": true,
00:10:48.777       "data_offset": 2048,
00:10:48.777       "data_size": 63488
00:10:48.777     }
00:10:48.777   ]
00:10:48.777 }'
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.777 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.348 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.348 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:49.348 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.348 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 405cb27f-d965-47d5-9690-7bec7dbd1b44
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.349 [2024-11-28 18:51:18.796960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:49.349 [2024-11-28 18:51:18.797229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:49.349 [2024-11-28 18:51:18.797292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:49.349 [2024-11-28 18:51:18.797562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0
00:10:49.349 [2024-11-28 18:51:18.797712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:49.349 [2024-11-28 18:51:18.797750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:49.349 NewBaseBdev
00:10:49.349 [2024-11-28 18:51:18.797879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.349 [
00:10:49.349   {
00:10:49.349     "name": "NewBaseBdev",
00:10:49.349     "aliases": [
00:10:49.349       "405cb27f-d965-47d5-9690-7bec7dbd1b44"
00:10:49.349     ],
00:10:49.349     "product_name": "Malloc disk",
00:10:49.349     "block_size": 512,
00:10:49.349     "num_blocks": 65536,
00:10:49.349     "uuid": "405cb27f-d965-47d5-9690-7bec7dbd1b44",
00:10:49.349     "assigned_rate_limits": {
00:10:49.349       "rw_ios_per_sec": 0,
00:10:49.349       "rw_mbytes_per_sec": 0,
00:10:49.349       "r_mbytes_per_sec": 0,
00:10:49.349       "w_mbytes_per_sec": 0
00:10:49.349     },
00:10:49.349     "claimed": true,
00:10:49.349     "claim_type": "exclusive_write",
00:10:49.349     "zoned": false,
00:10:49.349     "supported_io_types": {
00:10:49.349       "read": true,
00:10:49.349       "write": true,
00:10:49.349       "unmap": true,
00:10:49.349       "flush": true,
00:10:49.349       "reset": true,
00:10:49.349       "nvme_admin": false,
00:10:49.349       "nvme_io": false,
00:10:49.349       "nvme_io_md": false,
00:10:49.349       "write_zeroes": true,
00:10:49.349       "zcopy": true,
00:10:49.349       "get_zone_info": false,
00:10:49.349       "zone_management": false,
00:10:49.349       "zone_append": false,
00:10:49.349       "compare": false,
00:10:49.349       "compare_and_write": false,
00:10:49.349       "abort": true,
00:10:49.349       "seek_hole": false,
00:10:49.349       "seek_data": false,
00:10:49.349       "copy": true,
00:10:49.349       "nvme_iov_md": false
00:10:49.349     },
00:10:49.349     "memory_domains": [
00:10:49.349       {
00:10:49.349         "dma_device_id": "system",
00:10:49.349         "dma_device_type": 1
00:10:49.349       },
00:10:49.349       {
00:10:49.349         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:49.349         "dma_device_type": 2
00:10:49.349       }
00:10:49.349     ],
00:10:49.349     "driver_specific": {}
00:10:49.349   }
00:10:49.349 ]
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.349 "name": "Existed_Raid", 00:10:49.349 "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77", 00:10:49.349 "strip_size_kb": 64, 00:10:49.349 "state": "online", 00:10:49.349 "raid_level": "concat", 00:10:49.349 "superblock": true, 00:10:49.349 "num_base_bdevs": 4, 00:10:49.349 "num_base_bdevs_discovered": 4, 00:10:49.349 
"num_base_bdevs_operational": 4, 00:10:49.349 "base_bdevs_list": [ 00:10:49.349 { 00:10:49.349 "name": "NewBaseBdev", 00:10:49.349 "uuid": "405cb27f-d965-47d5-9690-7bec7dbd1b44", 00:10:49.349 "is_configured": true, 00:10:49.349 "data_offset": 2048, 00:10:49.349 "data_size": 63488 00:10:49.349 }, 00:10:49.349 { 00:10:49.349 "name": "BaseBdev2", 00:10:49.349 "uuid": "27371409-54dc-475f-a549-241bf4327a61", 00:10:49.349 "is_configured": true, 00:10:49.349 "data_offset": 2048, 00:10:49.349 "data_size": 63488 00:10:49.349 }, 00:10:49.349 { 00:10:49.349 "name": "BaseBdev3", 00:10:49.349 "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6", 00:10:49.349 "is_configured": true, 00:10:49.349 "data_offset": 2048, 00:10:49.349 "data_size": 63488 00:10:49.349 }, 00:10:49.349 { 00:10:49.349 "name": "BaseBdev4", 00:10:49.349 "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6", 00:10:49.349 "is_configured": true, 00:10:49.349 "data_offset": 2048, 00:10:49.349 "data_size": 63488 00:10:49.349 } 00:10:49.349 ] 00:10:49.349 }' 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.349 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.610 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.871 [2024-11-28 18:51:19.217435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.871 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.871 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.871 "name": "Existed_Raid", 00:10:49.871 "aliases": [ 00:10:49.871 "9f96ae71-a3dc-49f1-bd05-e7a3829baa77" 00:10:49.871 ], 00:10:49.871 "product_name": "Raid Volume", 00:10:49.871 "block_size": 512, 00:10:49.871 "num_blocks": 253952, 00:10:49.871 "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77", 00:10:49.871 "assigned_rate_limits": { 00:10:49.871 "rw_ios_per_sec": 0, 00:10:49.871 "rw_mbytes_per_sec": 0, 00:10:49.871 "r_mbytes_per_sec": 0, 00:10:49.871 "w_mbytes_per_sec": 0 00:10:49.871 }, 00:10:49.871 "claimed": false, 00:10:49.871 "zoned": false, 00:10:49.871 "supported_io_types": { 00:10:49.871 "read": true, 00:10:49.871 "write": true, 00:10:49.871 "unmap": true, 00:10:49.871 "flush": true, 00:10:49.871 "reset": true, 00:10:49.872 "nvme_admin": false, 00:10:49.872 "nvme_io": false, 00:10:49.872 "nvme_io_md": false, 00:10:49.872 "write_zeroes": true, 00:10:49.872 "zcopy": false, 00:10:49.872 "get_zone_info": false, 00:10:49.872 "zone_management": false, 00:10:49.872 "zone_append": false, 00:10:49.872 "compare": false, 00:10:49.872 "compare_and_write": false, 00:10:49.872 "abort": false, 00:10:49.872 "seek_hole": false, 00:10:49.872 "seek_data": false, 00:10:49.872 "copy": false, 00:10:49.872 "nvme_iov_md": false 00:10:49.872 }, 00:10:49.872 
"memory_domains": [ 00:10:49.872 { 00:10:49.872 "dma_device_id": "system", 00:10:49.872 "dma_device_type": 1 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.872 "dma_device_type": 2 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "dma_device_id": "system", 00:10:49.872 "dma_device_type": 1 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.872 "dma_device_type": 2 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "dma_device_id": "system", 00:10:49.872 "dma_device_type": 1 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.872 "dma_device_type": 2 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "dma_device_id": "system", 00:10:49.872 "dma_device_type": 1 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.872 "dma_device_type": 2 00:10:49.872 } 00:10:49.872 ], 00:10:49.872 "driver_specific": { 00:10:49.872 "raid": { 00:10:49.872 "uuid": "9f96ae71-a3dc-49f1-bd05-e7a3829baa77", 00:10:49.872 "strip_size_kb": 64, 00:10:49.872 "state": "online", 00:10:49.872 "raid_level": "concat", 00:10:49.872 "superblock": true, 00:10:49.872 "num_base_bdevs": 4, 00:10:49.872 "num_base_bdevs_discovered": 4, 00:10:49.872 "num_base_bdevs_operational": 4, 00:10:49.872 "base_bdevs_list": [ 00:10:49.872 { 00:10:49.872 "name": "NewBaseBdev", 00:10:49.872 "uuid": "405cb27f-d965-47d5-9690-7bec7dbd1b44", 00:10:49.872 "is_configured": true, 00:10:49.872 "data_offset": 2048, 00:10:49.872 "data_size": 63488 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "name": "BaseBdev2", 00:10:49.872 "uuid": "27371409-54dc-475f-a549-241bf4327a61", 00:10:49.872 "is_configured": true, 00:10:49.872 "data_offset": 2048, 00:10:49.872 "data_size": 63488 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "name": "BaseBdev3", 00:10:49.872 "uuid": "c6e40783-0ee7-44b2-8e3d-e27423e97db6", 00:10:49.872 "is_configured": true, 00:10:49.872 "data_offset": 2048, 00:10:49.872 
"data_size": 63488 00:10:49.872 }, 00:10:49.872 { 00:10:49.872 "name": "BaseBdev4", 00:10:49.872 "uuid": "cc4a1a41-94fa-4db6-9a62-2099a98c02f6", 00:10:49.872 "is_configured": true, 00:10:49.872 "data_offset": 2048, 00:10:49.872 "data_size": 63488 00:10:49.872 } 00:10:49.872 ] 00:10:49.872 } 00:10:49.872 } 00:10:49.872 }' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:49.872 BaseBdev2 00:10:49.872 BaseBdev3 00:10:49.872 BaseBdev4' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.872 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.133 [2024-11-28 18:51:19.489189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.133 [2024-11-28 18:51:19.489215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.133 [2024-11-28 18:51:19.489281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.133 [2024-11-28 18:51:19.489345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.133 [2024-11-28 18:51:19.489361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.133 
18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84328 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84328 ']' 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84328 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84328 00:10:50.133 killing process with pid 84328 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84328' 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84328 00:10:50.133 [2024-11-28 18:51:19.529823] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.133 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84328 00:10:50.133 [2024-11-28 18:51:19.568910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.393 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:50.393 00:10:50.393 real 0m8.818s 00:10:50.393 user 0m15.204s 00:10:50.393 sys 0m1.713s 00:10:50.393 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.393 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 ************************************ 
00:10:50.393 END TEST raid_state_function_test_sb 00:10:50.393 ************************************ 00:10:50.393 18:51:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:50.393 18:51:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:50.393 18:51:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.393 18:51:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 ************************************ 00:10:50.393 START TEST raid_superblock_test 00:10:50.393 ************************************ 00:10:50.393 18:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:50.393 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:50.393 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:50.393 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:50.393 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:50.393 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local 
raid_bdev_uuid 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84965 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84965 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84965 ']' 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.394 18:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.394 [2024-11-28 18:51:19.949286] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:10:50.394 [2024-11-28 18:51:19.949509] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84965 ] 00:10:50.654 [2024-11-28 18:51:20.083762] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:50.654 [2024-11-28 18:51:20.122994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.655 [2024-11-28 18:51:20.148184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.655 [2024-11-28 18:51:20.189480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.655 [2024-11-28 18:51:20.189592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.224 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.225 malloc1 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.225 [2024-11-28 18:51:20.785556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:51.225 [2024-11-28 18:51:20.785657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.225 [2024-11-28 18:51:20.785697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:51.225 [2024-11-28 18:51:20.785724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.225 [2024-11-28 18:51:20.787835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.225 [2024-11-28 18:51:20.787905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:51.225 pt1 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.225 malloc2 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.225 [2024-11-28 18:51:20.818172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.225 [2024-11-28 18:51:20.818278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.225 [2024-11-28 18:51:20.818312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:51.225 [2024-11-28 18:51:20.818338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.225 [2024-11-28 18:51:20.820349] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.225 [2024-11-28 18:51:20.820432] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.225 pt2 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.225 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.485 malloc3 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.485 18:51:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.485 [2024-11-28 18:51:20.846833] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:51.485 [2024-11-28 18:51:20.846934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.485 [2024-11-28 18:51:20.846971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:51.485 [2024-11-28 18:51:20.846997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.485 [2024-11-28 18:51:20.849044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.485 [2024-11-28 18:51:20.849113] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:51.485 pt3 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.485 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.486 malloc4 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.486 [2024-11-28 18:51:20.895784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:51.486 [2024-11-28 18:51:20.895921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.486 [2024-11-28 18:51:20.895985] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:51.486 [2024-11-28 18:51:20.896038] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.486 [2024-11-28 18:51:20.899159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.486 [2024-11-28 18:51:20.899257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:51.486 pt4 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.486 18:51:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.486 [2024-11-28 18:51:20.907953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:51.486 [2024-11-28 18:51:20.909919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.486 [2024-11-28 18:51:20.910022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:51.486 [2024-11-28 18:51:20.910070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:51.486 [2024-11-28 18:51:20.910231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:51.486 [2024-11-28 18:51:20.910243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:51.486 [2024-11-28 18:51:20.910531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:51.486 [2024-11-28 18:51:20.910680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:51.486 [2024-11-28 18:51:20.910700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:51.486 [2024-11-28 18:51:20.910832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.486 "name": "raid_bdev1", 00:10:51.486 "uuid": "a72a911d-533d-4bf8-b64d-d117966456bc", 00:10:51.486 "strip_size_kb": 64, 00:10:51.486 "state": "online", 00:10:51.486 "raid_level": "concat", 00:10:51.486 "superblock": true, 00:10:51.486 "num_base_bdevs": 4, 00:10:51.486 "num_base_bdevs_discovered": 4, 00:10:51.486 "num_base_bdevs_operational": 4, 00:10:51.486 "base_bdevs_list": [ 00:10:51.486 { 00:10:51.486 "name": "pt1", 00:10:51.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.486 "is_configured": true, 00:10:51.486 "data_offset": 2048, 00:10:51.486 "data_size": 63488 00:10:51.486 }, 00:10:51.486 { 00:10:51.486 "name": "pt2", 00:10:51.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.486 "is_configured": true, 00:10:51.486 "data_offset": 2048, 00:10:51.486 
"data_size": 63488 00:10:51.486 }, 00:10:51.486 { 00:10:51.486 "name": "pt3", 00:10:51.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.486 "is_configured": true, 00:10:51.486 "data_offset": 2048, 00:10:51.486 "data_size": 63488 00:10:51.486 }, 00:10:51.486 { 00:10:51.486 "name": "pt4", 00:10:51.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.486 "is_configured": true, 00:10:51.486 "data_offset": 2048, 00:10:51.486 "data_size": 63488 00:10:51.486 } 00:10:51.486 ] 00:10:51.486 }' 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.486 18:51:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.057 [2024-11-28 18:51:21.416343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.057 "name": "raid_bdev1", 00:10:52.057 "aliases": [ 00:10:52.057 "a72a911d-533d-4bf8-b64d-d117966456bc" 00:10:52.057 ], 00:10:52.057 "product_name": "Raid Volume", 00:10:52.057 "block_size": 512, 00:10:52.057 "num_blocks": 253952, 00:10:52.057 "uuid": "a72a911d-533d-4bf8-b64d-d117966456bc", 00:10:52.057 "assigned_rate_limits": { 00:10:52.057 "rw_ios_per_sec": 0, 00:10:52.057 "rw_mbytes_per_sec": 0, 00:10:52.057 "r_mbytes_per_sec": 0, 00:10:52.057 "w_mbytes_per_sec": 0 00:10:52.057 }, 00:10:52.057 "claimed": false, 00:10:52.057 "zoned": false, 00:10:52.057 "supported_io_types": { 00:10:52.057 "read": true, 00:10:52.057 "write": true, 00:10:52.057 "unmap": true, 00:10:52.057 "flush": true, 00:10:52.057 "reset": true, 00:10:52.057 "nvme_admin": false, 00:10:52.057 "nvme_io": false, 00:10:52.057 "nvme_io_md": false, 00:10:52.057 "write_zeroes": true, 00:10:52.057 "zcopy": false, 00:10:52.057 "get_zone_info": false, 00:10:52.057 "zone_management": false, 00:10:52.057 "zone_append": false, 00:10:52.057 "compare": false, 00:10:52.057 "compare_and_write": false, 00:10:52.057 "abort": false, 00:10:52.057 "seek_hole": false, 00:10:52.057 "seek_data": false, 00:10:52.057 "copy": false, 00:10:52.057 "nvme_iov_md": false 00:10:52.057 }, 00:10:52.057 "memory_domains": [ 00:10:52.057 { 00:10:52.057 "dma_device_id": "system", 00:10:52.057 "dma_device_type": 1 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.057 "dma_device_type": 2 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "dma_device_id": "system", 00:10:52.057 "dma_device_type": 1 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.057 "dma_device_type": 2 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "dma_device_id": "system", 00:10:52.057 "dma_device_type": 1 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:52.057 "dma_device_type": 2 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "dma_device_id": "system", 00:10:52.057 "dma_device_type": 1 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.057 "dma_device_type": 2 00:10:52.057 } 00:10:52.057 ], 00:10:52.057 "driver_specific": { 00:10:52.057 "raid": { 00:10:52.057 "uuid": "a72a911d-533d-4bf8-b64d-d117966456bc", 00:10:52.057 "strip_size_kb": 64, 00:10:52.057 "state": "online", 00:10:52.057 "raid_level": "concat", 00:10:52.057 "superblock": true, 00:10:52.057 "num_base_bdevs": 4, 00:10:52.057 "num_base_bdevs_discovered": 4, 00:10:52.057 "num_base_bdevs_operational": 4, 00:10:52.057 "base_bdevs_list": [ 00:10:52.057 { 00:10:52.057 "name": "pt1", 00:10:52.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.057 "is_configured": true, 00:10:52.057 "data_offset": 2048, 00:10:52.057 "data_size": 63488 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "name": "pt2", 00:10:52.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.057 "is_configured": true, 00:10:52.057 "data_offset": 2048, 00:10:52.057 "data_size": 63488 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "name": "pt3", 00:10:52.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.057 "is_configured": true, 00:10:52.057 "data_offset": 2048, 00:10:52.057 "data_size": 63488 00:10:52.057 }, 00:10:52.057 { 00:10:52.057 "name": "pt4", 00:10:52.057 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.057 "is_configured": true, 00:10:52.057 "data_offset": 2048, 00:10:52.057 "data_size": 63488 00:10:52.057 } 00:10:52.057 ] 00:10:52.057 } 00:10:52.057 } 00:10:52.057 }' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:52.057 pt2 00:10:52.057 pt3 00:10:52.057 
pt4' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.057 18:51:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.057 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 [2024-11-28 18:51:21.724373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a72a911d-533d-4bf8-b64d-d117966456bc 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a72a911d-533d-4bf8-b64d-d117966456bc ']' 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 [2024-11-28 18:51:21.768104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.318 [2024-11-28 18:51:21.768166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.318 [2024-11-28 18:51:21.768246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.318 [2024-11-28 18:51:21.768319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.318 [2024-11-28 18:51:21.768337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.318 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:52.319 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.579 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:52.579 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.579 18:51:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:52.579 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.579 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:52.579 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.579 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.579 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.579 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.580 [2024-11-28 18:51:21.932204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:52.580 [2024-11-28 18:51:21.934076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:52.580 [2024-11-28 18:51:21.934158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:52.580 [2024-11-28 18:51:21.934216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:52.580 [2024-11-28 18:51:21.934289] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:52.580 [2024-11-28 18:51:21.934370] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:52.580 [2024-11-28 18:51:21.934422] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:52.580 [2024-11-28 18:51:21.934485] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:52.580 [2024-11-28 
18:51:21.934533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.580 [2024-11-28 18:51:21.934570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:52.580 request: 00:10:52.580 { 00:10:52.580 "name": "raid_bdev1", 00:10:52.580 "raid_level": "concat", 00:10:52.580 "base_bdevs": [ 00:10:52.580 "malloc1", 00:10:52.580 "malloc2", 00:10:52.580 "malloc3", 00:10:52.580 "malloc4" 00:10:52.580 ], 00:10:52.580 "strip_size_kb": 64, 00:10:52.580 "superblock": false, 00:10:52.580 "method": "bdev_raid_create", 00:10:52.580 "req_id": 1 00:10:52.580 } 00:10:52.580 Got JSON-RPC error response 00:10:52.580 response: 00:10:52.580 { 00:10:52.580 "code": -17, 00:10:52.580 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:52.580 } 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:52.580 18:51:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.580 [2024-11-28 18:51:21.988179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:52.580 [2024-11-28 18:51:21.988228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.580 [2024-11-28 18:51:21.988243] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:52.580 [2024-11-28 18:51:21.988253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.580 [2024-11-28 18:51:21.990329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.580 [2024-11-28 18:51:21.990368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:52.580 [2024-11-28 18:51:21.990440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:52.580 [2024-11-28 18:51:21.990474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:52.580 pt1 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.580 18:51:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.580 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.580 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.580 "name": "raid_bdev1", 00:10:52.580 "uuid": "a72a911d-533d-4bf8-b64d-d117966456bc", 00:10:52.580 "strip_size_kb": 64, 00:10:52.580 "state": "configuring", 00:10:52.580 "raid_level": "concat", 00:10:52.580 "superblock": true, 00:10:52.580 "num_base_bdevs": 4, 00:10:52.580 "num_base_bdevs_discovered": 1, 00:10:52.580 "num_base_bdevs_operational": 4, 00:10:52.580 "base_bdevs_list": [ 00:10:52.580 { 00:10:52.580 "name": "pt1", 00:10:52.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.580 "is_configured": true, 00:10:52.580 "data_offset": 2048, 00:10:52.580 "data_size": 63488 00:10:52.580 }, 00:10:52.580 { 00:10:52.580 "name": null, 00:10:52.580 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:52.580 "is_configured": false, 00:10:52.580 "data_offset": 2048, 00:10:52.580 "data_size": 63488 00:10:52.580 }, 00:10:52.580 { 00:10:52.580 "name": null, 00:10:52.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.580 "is_configured": false, 00:10:52.580 "data_offset": 2048, 00:10:52.580 "data_size": 63488 00:10:52.580 }, 00:10:52.580 { 00:10:52.580 "name": null, 00:10:52.580 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.580 "is_configured": false, 00:10:52.580 "data_offset": 2048, 00:10:52.580 "data_size": 63488 00:10:52.580 } 00:10:52.580 ] 00:10:52.580 }' 00:10:52.580 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.580 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.840 [2024-11-28 18:51:22.400302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:52.840 [2024-11-28 18:51:22.400407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.840 [2024-11-28 18:51:22.400455] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:52.840 [2024-11-28 18:51:22.400507] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.840 [2024-11-28 18:51:22.400944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.840 [2024-11-28 18:51:22.401005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:10:52.840 [2024-11-28 18:51:22.401100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:52.840 [2024-11-28 18:51:22.401154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.840 pt2 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.840 [2024-11-28 18:51:22.412302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.840 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.100 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.100 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.100 "name": "raid_bdev1", 00:10:53.100 "uuid": "a72a911d-533d-4bf8-b64d-d117966456bc", 00:10:53.100 "strip_size_kb": 64, 00:10:53.100 "state": "configuring", 00:10:53.100 "raid_level": "concat", 00:10:53.100 "superblock": true, 00:10:53.100 "num_base_bdevs": 4, 00:10:53.100 "num_base_bdevs_discovered": 1, 00:10:53.100 "num_base_bdevs_operational": 4, 00:10:53.100 "base_bdevs_list": [ 00:10:53.100 { 00:10:53.100 "name": "pt1", 00:10:53.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.100 "is_configured": true, 00:10:53.100 "data_offset": 2048, 00:10:53.100 "data_size": 63488 00:10:53.100 }, 00:10:53.100 { 00:10:53.100 "name": null, 00:10:53.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.100 "is_configured": false, 00:10:53.100 "data_offset": 0, 00:10:53.100 "data_size": 63488 00:10:53.100 }, 00:10:53.100 { 00:10:53.100 "name": null, 00:10:53.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.100 "is_configured": false, 00:10:53.100 "data_offset": 2048, 00:10:53.100 "data_size": 63488 00:10:53.100 }, 00:10:53.100 { 00:10:53.100 "name": null, 00:10:53.100 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.100 "is_configured": false, 00:10:53.100 "data_offset": 2048, 00:10:53.100 "data_size": 63488 00:10:53.100 } 00:10:53.100 ] 00:10:53.100 }' 
00:10:53.100 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.100 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.359 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:53.359 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.360 [2024-11-28 18:51:22.892455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.360 [2024-11-28 18:51:22.892512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.360 [2024-11-28 18:51:22.892530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:53.360 [2024-11-28 18:51:22.892539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.360 [2024-11-28 18:51:22.892908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.360 [2024-11-28 18:51:22.892924] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.360 [2024-11-28 18:51:22.892992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:53.360 [2024-11-28 18:51:22.893011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.360 pt2 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.360 18:51:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.360 [2024-11-28 18:51:22.904456] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:53.360 [2024-11-28 18:51:22.904501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.360 [2024-11-28 18:51:22.904517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:53.360 [2024-11-28 18:51:22.904525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.360 [2024-11-28 18:51:22.904843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.360 [2024-11-28 18:51:22.904859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:53.360 [2024-11-28 18:51:22.904920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:53.360 [2024-11-28 18:51:22.904941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:53.360 pt3 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.360 [2024-11-28 18:51:22.916436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:53.360 [2024-11-28 18:51:22.916488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.360 [2024-11-28 18:51:22.916502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:53.360 [2024-11-28 18:51:22.916510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.360 [2024-11-28 18:51:22.916805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.360 [2024-11-28 18:51:22.916819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:53.360 [2024-11-28 18:51:22.916869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:53.360 [2024-11-28 18:51:22.916885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:53.360 [2024-11-28 18:51:22.916975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:53.360 [2024-11-28 18:51:22.916982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.360 [2024-11-28 18:51:22.917198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:53.360 [2024-11-28 18:51:22.917319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:53.360 [2024-11-28 18:51:22.917337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:53.360 [2024-11-28 18:51:22.917424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.360 pt4 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.360 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.624 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.624 "name": 
"raid_bdev1", 00:10:53.624 "uuid": "a72a911d-533d-4bf8-b64d-d117966456bc", 00:10:53.624 "strip_size_kb": 64, 00:10:53.624 "state": "online", 00:10:53.624 "raid_level": "concat", 00:10:53.624 "superblock": true, 00:10:53.624 "num_base_bdevs": 4, 00:10:53.624 "num_base_bdevs_discovered": 4, 00:10:53.624 "num_base_bdevs_operational": 4, 00:10:53.624 "base_bdevs_list": [ 00:10:53.624 { 00:10:53.624 "name": "pt1", 00:10:53.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.624 "is_configured": true, 00:10:53.624 "data_offset": 2048, 00:10:53.624 "data_size": 63488 00:10:53.624 }, 00:10:53.624 { 00:10:53.624 "name": "pt2", 00:10:53.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.624 "is_configured": true, 00:10:53.624 "data_offset": 2048, 00:10:53.624 "data_size": 63488 00:10:53.624 }, 00:10:53.624 { 00:10:53.624 "name": "pt3", 00:10:53.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.624 "is_configured": true, 00:10:53.624 "data_offset": 2048, 00:10:53.624 "data_size": 63488 00:10:53.624 }, 00:10:53.624 { 00:10:53.624 "name": "pt4", 00:10:53.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.624 "is_configured": true, 00:10:53.624 "data_offset": 2048, 00:10:53.624 "data_size": 63488 00:10:53.624 } 00:10:53.624 ] 00:10:53.624 }' 00:10:53.624 18:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.624 18:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.898 [2024-11-28 18:51:23.304852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.898 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.898 "name": "raid_bdev1", 00:10:53.898 "aliases": [ 00:10:53.898 "a72a911d-533d-4bf8-b64d-d117966456bc" 00:10:53.898 ], 00:10:53.898 "product_name": "Raid Volume", 00:10:53.898 "block_size": 512, 00:10:53.898 "num_blocks": 253952, 00:10:53.898 "uuid": "a72a911d-533d-4bf8-b64d-d117966456bc", 00:10:53.898 "assigned_rate_limits": { 00:10:53.898 "rw_ios_per_sec": 0, 00:10:53.898 "rw_mbytes_per_sec": 0, 00:10:53.898 "r_mbytes_per_sec": 0, 00:10:53.898 "w_mbytes_per_sec": 0 00:10:53.898 }, 00:10:53.898 "claimed": false, 00:10:53.898 "zoned": false, 00:10:53.898 "supported_io_types": { 00:10:53.898 "read": true, 00:10:53.898 "write": true, 00:10:53.898 "unmap": true, 00:10:53.898 "flush": true, 00:10:53.898 "reset": true, 00:10:53.898 "nvme_admin": false, 00:10:53.898 "nvme_io": false, 00:10:53.898 "nvme_io_md": false, 00:10:53.898 "write_zeroes": true, 00:10:53.898 "zcopy": false, 00:10:53.898 "get_zone_info": false, 00:10:53.898 "zone_management": false, 00:10:53.898 "zone_append": false, 00:10:53.898 "compare": false, 00:10:53.898 "compare_and_write": false, 00:10:53.898 "abort": 
false, 00:10:53.898 "seek_hole": false, 00:10:53.898 "seek_data": false, 00:10:53.898 "copy": false, 00:10:53.898 "nvme_iov_md": false 00:10:53.898 }, 00:10:53.898 "memory_domains": [ 00:10:53.898 { 00:10:53.898 "dma_device_id": "system", 00:10:53.898 "dma_device_type": 1 00:10:53.898 }, 00:10:53.898 { 00:10:53.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.898 "dma_device_type": 2 00:10:53.898 }, 00:10:53.898 { 00:10:53.898 "dma_device_id": "system", 00:10:53.898 "dma_device_type": 1 00:10:53.898 }, 00:10:53.898 { 00:10:53.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.898 "dma_device_type": 2 00:10:53.898 }, 00:10:53.898 { 00:10:53.898 "dma_device_id": "system", 00:10:53.898 "dma_device_type": 1 00:10:53.898 }, 00:10:53.898 { 00:10:53.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.899 "dma_device_type": 2 00:10:53.899 }, 00:10:53.899 { 00:10:53.899 "dma_device_id": "system", 00:10:53.899 "dma_device_type": 1 00:10:53.899 }, 00:10:53.899 { 00:10:53.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.899 "dma_device_type": 2 00:10:53.899 } 00:10:53.899 ], 00:10:53.899 "driver_specific": { 00:10:53.899 "raid": { 00:10:53.899 "uuid": "a72a911d-533d-4bf8-b64d-d117966456bc", 00:10:53.899 "strip_size_kb": 64, 00:10:53.899 "state": "online", 00:10:53.899 "raid_level": "concat", 00:10:53.899 "superblock": true, 00:10:53.899 "num_base_bdevs": 4, 00:10:53.899 "num_base_bdevs_discovered": 4, 00:10:53.899 "num_base_bdevs_operational": 4, 00:10:53.899 "base_bdevs_list": [ 00:10:53.899 { 00:10:53.899 "name": "pt1", 00:10:53.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.899 "is_configured": true, 00:10:53.899 "data_offset": 2048, 00:10:53.899 "data_size": 63488 00:10:53.899 }, 00:10:53.899 { 00:10:53.899 "name": "pt2", 00:10:53.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.899 "is_configured": true, 00:10:53.899 "data_offset": 2048, 00:10:53.899 "data_size": 63488 00:10:53.899 }, 00:10:53.899 { 00:10:53.899 "name": "pt3", 
00:10:53.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.899 "is_configured": true, 00:10:53.899 "data_offset": 2048, 00:10:53.899 "data_size": 63488 00:10:53.899 }, 00:10:53.899 { 00:10:53.899 "name": "pt4", 00:10:53.899 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.899 "is_configured": true, 00:10:53.899 "data_offset": 2048, 00:10:53.899 "data_size": 63488 00:10:53.899 } 00:10:53.899 ] 00:10:53.899 } 00:10:53.899 } 00:10:53.899 }' 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:53.899 pt2 00:10:53.899 pt3 00:10:53.899 pt4' 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.899 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.899 18:51:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.173 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt4 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:54.174 [2024-11-28 18:51:23.656949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a72a911d-533d-4bf8-b64d-d117966456bc '!=' a72a911d-533d-4bf8-b64d-d117966456bc ']' 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84965 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 
-- # '[' -z 84965 ']' 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84965 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84965 00:10:54.174 killing process with pid 84965 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84965' 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 84965 00:10:54.174 [2024-11-28 18:51:23.728186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.174 [2024-11-28 18:51:23.728260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.174 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 84965 00:10:54.174 [2024-11-28 18:51:23.728338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.174 [2024-11-28 18:51:23.728347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:54.174 [2024-11-28 18:51:23.770722] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.434 18:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:54.434 00:10:54.434 real 0m4.134s 00:10:54.434 user 0m6.546s 00:10:54.434 sys 0m0.883s 00:10:54.434 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:10:54.434 ************************************ 00:10:54.434 END TEST raid_superblock_test 00:10:54.434 ************************************ 00:10:54.434 18:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.694 18:51:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:54.694 18:51:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:54.694 18:51:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.694 18:51:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.694 ************************************ 00:10:54.694 START TEST raid_read_error_test 00:10:54.694 ************************************ 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.h1YtOpqNhq 00:10:54.694 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=85218 00:10:54.695 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:54.695 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85218 00:10:54.695 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85218 ']' 00:10:54.695 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.695 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.695 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.695 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.695 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.695 [2024-11-28 18:51:24.175406] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:54.695 [2024-11-28 18:51:24.175556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85218 ] 00:10:54.955 [2024-11-28 18:51:24.311564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:54.955 [2024-11-28 18:51:24.350568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.955 [2024-11-28 18:51:24.375865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.955 [2024-11-28 18:51:24.418171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.955 [2024-11-28 18:51:24.418284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.525 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.525 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:55.525 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.525 18:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.525 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 18:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 BaseBdev1_malloc 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 true 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.525 [2024-11-28 18:51:25.030893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:55.525 [2024-11-28 18:51:25.030952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.525 [2024-11-28 18:51:25.030977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:55.525 [2024-11-28 18:51:25.030990] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.525 [2024-11-28 18:51:25.033140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.525 [2024-11-28 18:51:25.033241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.525 BaseBdev1 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 BaseBdev2_malloc 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 true 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 18:51:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 [2024-11-28 18:51:25.071604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.525 [2024-11-28 18:51:25.071716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.525 [2024-11-28 18:51:25.071735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:55.525 [2024-11-28 18:51:25.071745] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.525 [2024-11-28 18:51:25.073748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.525 [2024-11-28 18:51:25.073784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.525 BaseBdev2 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 BaseBdev3_malloc 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 true 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.525 [2024-11-28 18:51:25.112217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.525 [2024-11-28 18:51:25.112266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.525 [2024-11-28 18:51:25.112283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:55.525 [2024-11-28 18:51:25.112293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.525 [2024-11-28 18:51:25.114372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.525 [2024-11-28 18:51:25.114412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:55.525 BaseBdev3 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.525 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.785 BaseBdev4_malloc 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.785 true 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.785 [2024-11-28 18:51:25.176163] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:55.785 [2024-11-28 18:51:25.176270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.785 [2024-11-28 18:51:25.176294] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:55.785 [2024-11-28 18:51:25.176306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.785 [2024-11-28 18:51:25.178622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.785 [2024-11-28 18:51:25.178667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:55.785 BaseBdev4 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.785 [2024-11-28 18:51:25.188189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.785 [2024-11-28 18:51:25.189991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.785 [2024-11-28 18:51:25.190062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.785 [2024-11-28 18:51:25.190123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.785 [2024-11-28 18:51:25.190313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:55.785 [2024-11-28 18:51:25.190325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.785 [2024-11-28 18:51:25.190603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:55.785 [2024-11-28 18:51:25.190754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:55.785 [2024-11-28 18:51:25.190768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:55.785 [2024-11-28 18:51:25.190900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.785 18:51:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.785 "name": "raid_bdev1", 00:10:55.785 "uuid": "884b247c-c925-4862-9c35-780505a8caad", 00:10:55.785 "strip_size_kb": 64, 00:10:55.785 "state": "online", 00:10:55.785 "raid_level": "concat", 00:10:55.785 "superblock": true, 00:10:55.785 "num_base_bdevs": 4, 00:10:55.785 "num_base_bdevs_discovered": 4, 00:10:55.785 "num_base_bdevs_operational": 4, 00:10:55.785 "base_bdevs_list": [ 00:10:55.785 { 00:10:55.785 "name": "BaseBdev1", 00:10:55.785 "uuid": "2fb61b67-ca5e-5b28-a5e0-3dc93c19ece8", 00:10:55.785 "is_configured": true, 00:10:55.785 "data_offset": 2048, 00:10:55.785 "data_size": 63488 00:10:55.785 }, 00:10:55.785 { 00:10:55.785 "name": "BaseBdev2", 00:10:55.785 "uuid": "b8794f56-8a7d-5959-91b4-f012c06d8501", 
00:10:55.785 "is_configured": true, 00:10:55.785 "data_offset": 2048, 00:10:55.785 "data_size": 63488 00:10:55.785 }, 00:10:55.785 { 00:10:55.785 "name": "BaseBdev3", 00:10:55.785 "uuid": "c5fa731e-f9cc-5c7d-b6be-7cf4eedb2a41", 00:10:55.785 "is_configured": true, 00:10:55.785 "data_offset": 2048, 00:10:55.785 "data_size": 63488 00:10:55.785 }, 00:10:55.785 { 00:10:55.785 "name": "BaseBdev4", 00:10:55.785 "uuid": "27e1afcd-ce16-59e0-8059-27b9fd1806b2", 00:10:55.785 "is_configured": true, 00:10:55.785 "data_offset": 2048, 00:10:55.785 "data_size": 63488 00:10:55.785 } 00:10:55.785 ] 00:10:55.785 }' 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.785 18:51:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.355 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:56.355 18:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:56.355 [2024-11-28 18:51:25.752668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:57.295 18:51:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.295 "name": "raid_bdev1", 00:10:57.295 "uuid": "884b247c-c925-4862-9c35-780505a8caad", 00:10:57.295 "strip_size_kb": 64, 00:10:57.295 "state": "online", 00:10:57.295 "raid_level": "concat", 00:10:57.295 "superblock": true, 00:10:57.295 "num_base_bdevs": 4, 
00:10:57.295 "num_base_bdevs_discovered": 4, 00:10:57.295 "num_base_bdevs_operational": 4, 00:10:57.295 "base_bdevs_list": [ 00:10:57.295 { 00:10:57.295 "name": "BaseBdev1", 00:10:57.295 "uuid": "2fb61b67-ca5e-5b28-a5e0-3dc93c19ece8", 00:10:57.295 "is_configured": true, 00:10:57.295 "data_offset": 2048, 00:10:57.295 "data_size": 63488 00:10:57.295 }, 00:10:57.295 { 00:10:57.295 "name": "BaseBdev2", 00:10:57.295 "uuid": "b8794f56-8a7d-5959-91b4-f012c06d8501", 00:10:57.295 "is_configured": true, 00:10:57.295 "data_offset": 2048, 00:10:57.295 "data_size": 63488 00:10:57.295 }, 00:10:57.295 { 00:10:57.295 "name": "BaseBdev3", 00:10:57.295 "uuid": "c5fa731e-f9cc-5c7d-b6be-7cf4eedb2a41", 00:10:57.295 "is_configured": true, 00:10:57.295 "data_offset": 2048, 00:10:57.295 "data_size": 63488 00:10:57.295 }, 00:10:57.295 { 00:10:57.295 "name": "BaseBdev4", 00:10:57.295 "uuid": "27e1afcd-ce16-59e0-8059-27b9fd1806b2", 00:10:57.295 "is_configured": true, 00:10:57.295 "data_offset": 2048, 00:10:57.295 "data_size": 63488 00:10:57.295 } 00:10:57.295 ] 00:10:57.295 }' 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.295 18:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.556 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.556 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.556 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.556 [2024-11-28 18:51:27.147408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.556 [2024-11-28 18:51:27.147514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.556 [2024-11-28 18:51:27.150116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.556 [2024-11-28 18:51:27.150212] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.556 [2024-11-28 18:51:27.150274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.556 [2024-11-28 18:51:27.150318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:57.556 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.556 { 00:10:57.556 "results": [ 00:10:57.556 { 00:10:57.556 "job": "raid_bdev1", 00:10:57.556 "core_mask": "0x1", 00:10:57.556 "workload": "randrw", 00:10:57.556 "percentage": 50, 00:10:57.556 "status": "finished", 00:10:57.556 "queue_depth": 1, 00:10:57.556 "io_size": 131072, 00:10:57.556 "runtime": 1.392974, 00:10:57.556 "iops": 16761.978328382294, 00:10:57.556 "mibps": 2095.247291047787, 00:10:57.556 "io_failed": 1, 00:10:57.556 "io_timeout": 0, 00:10:57.556 "avg_latency_us": 82.43835225464646, 00:10:57.556 "min_latency_us": 24.990848078096402, 00:10:57.556 "max_latency_us": 1356.646038525233 00:10:57.556 } 00:10:57.556 ], 00:10:57.556 "core_count": 1 00:10:57.556 } 00:10:57.556 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85218 00:10:57.556 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85218 ']' 00:10:57.556 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85218 00:10:57.556 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:57.816 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.816 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85218 00:10:57.816 killing process with pid 85218 00:10:57.816 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.816 18:51:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.816 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85218' 00:10:57.816 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85218 00:10:57.816 [2024-11-28 18:51:27.198544] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.816 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85218 00:10:57.816 [2024-11-28 18:51:27.232520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.h1YtOpqNhq 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.077 ************************************ 00:10:58.077 END TEST raid_read_error_test 00:10:58.077 ************************************ 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:58.077 00:10:58.077 real 0m3.381s 00:10:58.077 user 0m4.262s 00:10:58.077 sys 0m0.579s 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.077 18:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.077 18:51:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test concat 4 write 00:10:58.077 18:51:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:58.077 18:51:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.077 18:51:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.077 ************************************ 00:10:58.077 START TEST raid_write_error_test 00:10:58.077 ************************************ 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NQL3bVSsNC 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85348 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 85348 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 85348 ']' 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.077 18:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.077 [2024-11-28 18:51:27.621834] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:58.077 [2024-11-28 18:51:27.622070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85348 ] 00:10:58.335 [2024-11-28 18:51:27.755773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:58.336 [2024-11-28 18:51:27.794480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.336 [2024-11-28 18:51:27.818937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.336 [2024-11-28 18:51:27.860949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.336 [2024-11-28 18:51:27.861059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.905 BaseBdev1_malloc 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.905 true 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.905 18:51:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.905 [2024-11-28 18:51:28.469302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:58.905 [2024-11-28 18:51:28.469357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.905 [2024-11-28 18:51:28.469375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:58.905 [2024-11-28 18:51:28.469393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.905 [2024-11-28 18:51:28.471545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.905 [2024-11-28 18:51:28.471583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.905 BaseBdev1 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.905 BaseBdev2_malloc 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.905 true 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.905 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 [2024-11-28 18:51:28.509868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:59.166 [2024-11-28 18:51:28.509918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.166 [2024-11-28 18:51:28.509934] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:59.166 [2024-11-28 18:51:28.509944] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.166 [2024-11-28 18:51:28.511985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.166 [2024-11-28 18:51:28.512024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:59.166 BaseBdev2 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 BaseBdev3_malloc 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 true 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 [2024-11-28 18:51:28.550477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:59.166 [2024-11-28 18:51:28.550576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.166 [2024-11-28 18:51:28.550596] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:59.166 [2024-11-28 18:51:28.550606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.166 [2024-11-28 18:51:28.552615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.166 [2024-11-28 18:51:28.552653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:59.166 BaseBdev3 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 BaseBdev4_malloc 00:10:59.166 
18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 true 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 [2024-11-28 18:51:28.614658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:59.166 [2024-11-28 18:51:28.614730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.166 [2024-11-28 18:51:28.614757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:59.166 [2024-11-28 18:51:28.614774] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.166 [2024-11-28 18:51:28.617439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.166 [2024-11-28 18:51:28.617480] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:59.166 BaseBdev4 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:59.166 18:51:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 [2024-11-28 18:51:28.626660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.166 [2024-11-28 18:51:28.628432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.166 [2024-11-28 18:51:28.628526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.166 [2024-11-28 18:51:28.628580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.166 [2024-11-28 18:51:28.628776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:59.166 [2024-11-28 18:51:28.628795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.166 [2024-11-28 18:51:28.629033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:59.166 [2024-11-28 18:51:28.629163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:59.166 [2024-11-28 18:51:28.629173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:59.166 [2024-11-28 18:51:28.629299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.167 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.167 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.167 "name": "raid_bdev1", 00:10:59.167 "uuid": "a6803b64-15ac-4efe-a08f-b456a8e4b71f", 00:10:59.167 "strip_size_kb": 64, 00:10:59.167 "state": "online", 00:10:59.167 "raid_level": "concat", 00:10:59.167 "superblock": true, 00:10:59.167 "num_base_bdevs": 4, 00:10:59.167 "num_base_bdevs_discovered": 4, 00:10:59.167 "num_base_bdevs_operational": 4, 00:10:59.167 "base_bdevs_list": [ 00:10:59.167 { 00:10:59.167 "name": "BaseBdev1", 00:10:59.167 "uuid": "b6c89c38-b95a-5753-b914-29402ffd5c20", 00:10:59.167 "is_configured": true, 00:10:59.167 "data_offset": 2048, 00:10:59.167 "data_size": 63488 00:10:59.167 }, 00:10:59.167 { 00:10:59.167 
"name": "BaseBdev2", 00:10:59.167 "uuid": "15788715-e2aa-52b3-b1b9-f2804c40f88e", 00:10:59.167 "is_configured": true, 00:10:59.167 "data_offset": 2048, 00:10:59.167 "data_size": 63488 00:10:59.167 }, 00:10:59.167 { 00:10:59.167 "name": "BaseBdev3", 00:10:59.167 "uuid": "3ddfeca1-d212-5b1c-a612-9013f8efbf87", 00:10:59.167 "is_configured": true, 00:10:59.167 "data_offset": 2048, 00:10:59.167 "data_size": 63488 00:10:59.167 }, 00:10:59.167 { 00:10:59.167 "name": "BaseBdev4", 00:10:59.167 "uuid": "f5db0f55-8036-56bc-9cea-f193388d52a4", 00:10:59.167 "is_configured": true, 00:10:59.167 "data_offset": 2048, 00:10:59.167 "data_size": 63488 00:10:59.167 } 00:10:59.167 ] 00:10:59.167 }' 00:10:59.167 18:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.167 18:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.737 18:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.737 18:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.737 [2024-11-28 18:51:29.147121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.679 "name": "raid_bdev1", 00:11:00.679 "uuid": "a6803b64-15ac-4efe-a08f-b456a8e4b71f", 00:11:00.679 "strip_size_kb": 64, 00:11:00.679 "state": "online", 
00:11:00.679 "raid_level": "concat", 00:11:00.679 "superblock": true, 00:11:00.679 "num_base_bdevs": 4, 00:11:00.679 "num_base_bdevs_discovered": 4, 00:11:00.679 "num_base_bdevs_operational": 4, 00:11:00.679 "base_bdevs_list": [ 00:11:00.679 { 00:11:00.679 "name": "BaseBdev1", 00:11:00.679 "uuid": "b6c89c38-b95a-5753-b914-29402ffd5c20", 00:11:00.679 "is_configured": true, 00:11:00.679 "data_offset": 2048, 00:11:00.679 "data_size": 63488 00:11:00.679 }, 00:11:00.679 { 00:11:00.679 "name": "BaseBdev2", 00:11:00.679 "uuid": "15788715-e2aa-52b3-b1b9-f2804c40f88e", 00:11:00.679 "is_configured": true, 00:11:00.679 "data_offset": 2048, 00:11:00.679 "data_size": 63488 00:11:00.679 }, 00:11:00.679 { 00:11:00.679 "name": "BaseBdev3", 00:11:00.679 "uuid": "3ddfeca1-d212-5b1c-a612-9013f8efbf87", 00:11:00.679 "is_configured": true, 00:11:00.679 "data_offset": 2048, 00:11:00.679 "data_size": 63488 00:11:00.679 }, 00:11:00.679 { 00:11:00.679 "name": "BaseBdev4", 00:11:00.679 "uuid": "f5db0f55-8036-56bc-9cea-f193388d52a4", 00:11:00.679 "is_configured": true, 00:11:00.679 "data_offset": 2048, 00:11:00.679 "data_size": 63488 00:11:00.679 } 00:11:00.679 ] 00:11:00.679 }' 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.679 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.939 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.939 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.939 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.939 [2024-11-28 18:51:30.533447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.939 [2024-11-28 18:51:30.533479] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.939 [2024-11-28 18:51:30.535943] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.939 [2024-11-28 18:51:30.536013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.939 [2024-11-28 18:51:30.536057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.939 [2024-11-28 18:51:30.536068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:00.939 { 00:11:00.939 "results": [ 00:11:00.939 { 00:11:00.939 "job": "raid_bdev1", 00:11:00.939 "core_mask": "0x1", 00:11:00.939 "workload": "randrw", 00:11:00.939 "percentage": 50, 00:11:00.939 "status": "finished", 00:11:00.939 "queue_depth": 1, 00:11:00.939 "io_size": 131072, 00:11:00.939 "runtime": 1.384444, 00:11:00.939 "iops": 16806.024656829744, 00:11:00.939 "mibps": 2100.753082103718, 00:11:00.939 "io_failed": 1, 00:11:00.939 "io_timeout": 0, 00:11:00.939 "avg_latency_us": 82.23883084113398, 00:11:00.939 "min_latency_us": 24.76771550597054, 00:11:00.939 "max_latency_us": 1356.646038525233 00:11:00.939 } 00:11:00.939 ], 00:11:00.939 "core_count": 1 00:11:00.939 } 00:11:00.939 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.939 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85348 00:11:00.939 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 85348 ']' 00:11:00.939 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 85348 00:11:00.939 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:01.199 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.199 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85348 00:11:01.199 killing process with pid 85348 00:11:01.199 18:51:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.199 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.199 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85348' 00:11:01.199 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 85348 00:11:01.199 [2024-11-28 18:51:30.581876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.199 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 85348 00:11:01.199 [2024-11-28 18:51:30.616481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NQL3bVSsNC 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.460 ************************************ 00:11:01.460 END TEST raid_write_error_test 00:11:01.460 ************************************ 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:01.460 00:11:01.460 real 0m3.316s 00:11:01.460 user 0m4.163s 00:11:01.460 sys 0m0.547s 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.460 18:51:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.460 18:51:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:01.460 18:51:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:01.460 18:51:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:01.460 18:51:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.460 18:51:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.460 ************************************ 00:11:01.460 START TEST raid_state_function_test 00:11:01.460 ************************************ 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.460 18:51:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:01.460 Process raid pid: 85475 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=85475 
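The `(( i = 1 )) ... (( i <= num_base_bdevs ))` trace above is the helper expanding the base bdev names into the `base_bdevs` array. A plain-bash sketch of the same expansion, using the variable names from the trace:

```shell
# Equivalent of the traced loop: generate BaseBdev1..BaseBdev4 and collect
# them into the base_bdevs array the raid tests iterate over.
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```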
00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85475' 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 85475 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 85475 ']' 00:11:01.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.460 18:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.460 [2024-11-28 18:51:31.007742] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:01.460 [2024-11-28 18:51:31.007866] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.721 [2024-11-28 18:51:31.141632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:01.721 [2024-11-28 18:51:31.174930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.721 [2024-11-28 18:51:31.199178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.721 [2024-11-28 18:51:31.241188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.721 [2024-11-28 18:51:31.241308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.292 [2024-11-28 18:51:31.824678] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.292 [2024-11-28 18:51:31.824782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.292 [2024-11-28 18:51:31.824832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.292 [2024-11-28 18:51:31.824855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.292 [2024-11-28 18:51:31.824917] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.292 [2024-11-28 18:51:31.824955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.292 [2024-11-28 18:51:31.824984] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.292 [2024-11-28 
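The strip-size selection traced earlier (`'[' raid1 '!=' raid1 ']'` falling through to `strip_size=0`) reflects that raid1 mirrors rather than stripes, so no `-z` argument is passed to `bdev_raid_create` in this run. A sketch of that branch, with the 64 KiB value borrowed from the concat run above:

```shell
# Sketch of the traced strip-size branch: striping levels get -z 64,
# raid1 gets no strip-size argument at all.
raid_level=raid1
if [ "$raid_level" != raid1 ]; then
    strip_size=64                            # KiB, as in the concat test
    strip_size_create_arg="-z $strip_size"   # forwarded to bdev_raid_create
else
    strip_size=0
    strip_size_create_arg=""
fi
echo "$strip_size"   # 0
```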
18:51:31.825012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.292 "name": "Existed_Raid", 00:11:02.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.292 "strip_size_kb": 0, 00:11:02.292 "state": "configuring", 00:11:02.292 "raid_level": "raid1", 00:11:02.292 "superblock": false, 00:11:02.292 "num_base_bdevs": 4, 00:11:02.292 "num_base_bdevs_discovered": 0, 00:11:02.292 "num_base_bdevs_operational": 4, 00:11:02.292 "base_bdevs_list": [ 00:11:02.292 { 00:11:02.292 "name": "BaseBdev1", 00:11:02.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.292 "is_configured": false, 00:11:02.292 "data_offset": 0, 00:11:02.292 "data_size": 0 00:11:02.292 }, 00:11:02.292 { 00:11:02.292 "name": "BaseBdev2", 00:11:02.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.292 "is_configured": false, 00:11:02.292 "data_offset": 0, 00:11:02.292 "data_size": 0 00:11:02.292 }, 00:11:02.292 { 00:11:02.292 "name": "BaseBdev3", 00:11:02.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.292 "is_configured": false, 00:11:02.292 "data_offset": 0, 00:11:02.292 "data_size": 0 00:11:02.292 }, 00:11:02.292 { 00:11:02.292 "name": "BaseBdev4", 00:11:02.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.292 "is_configured": false, 00:11:02.292 "data_offset": 0, 00:11:02.292 "data_size": 0 00:11:02.292 } 00:11:02.292 ] 00:11:02.292 }' 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.292 18:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.863 [2024-11-28 18:51:32.244693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:02.863 [2024-11-28 18:51:32.244766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.863 [2024-11-28 18:51:32.256739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.863 [2024-11-28 18:51:32.256810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.863 [2024-11-28 18:51:32.256838] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.863 [2024-11-28 18:51:32.256857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.863 [2024-11-28 18:51:32.256875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.863 [2024-11-28 18:51:32.256893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.863 [2024-11-28 18:51:32.256911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.863 [2024-11-28 18:51:32.256928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.863 18:51:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.863 [2024-11-28 18:51:32.277370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.863 BaseBdev1 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.863 [ 00:11:02.863 { 00:11:02.863 "name": "BaseBdev1", 00:11:02.863 "aliases": [ 
00:11:02.863 "b1635eeb-6db9-4ac2-a382-b4d1972799de" 00:11:02.863 ], 00:11:02.863 "product_name": "Malloc disk", 00:11:02.863 "block_size": 512, 00:11:02.863 "num_blocks": 65536, 00:11:02.863 "uuid": "b1635eeb-6db9-4ac2-a382-b4d1972799de", 00:11:02.863 "assigned_rate_limits": { 00:11:02.863 "rw_ios_per_sec": 0, 00:11:02.863 "rw_mbytes_per_sec": 0, 00:11:02.863 "r_mbytes_per_sec": 0, 00:11:02.863 "w_mbytes_per_sec": 0 00:11:02.863 }, 00:11:02.863 "claimed": true, 00:11:02.863 "claim_type": "exclusive_write", 00:11:02.863 "zoned": false, 00:11:02.863 "supported_io_types": { 00:11:02.863 "read": true, 00:11:02.863 "write": true, 00:11:02.863 "unmap": true, 00:11:02.863 "flush": true, 00:11:02.863 "reset": true, 00:11:02.863 "nvme_admin": false, 00:11:02.863 "nvme_io": false, 00:11:02.863 "nvme_io_md": false, 00:11:02.863 "write_zeroes": true, 00:11:02.863 "zcopy": true, 00:11:02.863 "get_zone_info": false, 00:11:02.863 "zone_management": false, 00:11:02.863 "zone_append": false, 00:11:02.863 "compare": false, 00:11:02.863 "compare_and_write": false, 00:11:02.863 "abort": true, 00:11:02.863 "seek_hole": false, 00:11:02.863 "seek_data": false, 00:11:02.863 "copy": true, 00:11:02.863 "nvme_iov_md": false 00:11:02.863 }, 00:11:02.863 "memory_domains": [ 00:11:02.863 { 00:11:02.863 "dma_device_id": "system", 00:11:02.863 "dma_device_type": 1 00:11:02.863 }, 00:11:02.863 { 00:11:02.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.863 "dma_device_type": 2 00:11:02.863 } 00:11:02.863 ], 00:11:02.863 "driver_specific": {} 00:11:02.863 } 00:11:02.863 ] 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.863 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.864 "name": "Existed_Raid", 00:11:02.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.864 "strip_size_kb": 0, 00:11:02.864 "state": "configuring", 00:11:02.864 "raid_level": "raid1", 00:11:02.864 "superblock": false, 00:11:02.864 "num_base_bdevs": 4, 00:11:02.864 "num_base_bdevs_discovered": 1, 00:11:02.864 "num_base_bdevs_operational": 4, 
00:11:02.864 "base_bdevs_list": [ 00:11:02.864 { 00:11:02.864 "name": "BaseBdev1", 00:11:02.864 "uuid": "b1635eeb-6db9-4ac2-a382-b4d1972799de", 00:11:02.864 "is_configured": true, 00:11:02.864 "data_offset": 0, 00:11:02.864 "data_size": 65536 00:11:02.864 }, 00:11:02.864 { 00:11:02.864 "name": "BaseBdev2", 00:11:02.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.864 "is_configured": false, 00:11:02.864 "data_offset": 0, 00:11:02.864 "data_size": 0 00:11:02.864 }, 00:11:02.864 { 00:11:02.864 "name": "BaseBdev3", 00:11:02.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.864 "is_configured": false, 00:11:02.864 "data_offset": 0, 00:11:02.864 "data_size": 0 00:11:02.864 }, 00:11:02.864 { 00:11:02.864 "name": "BaseBdev4", 00:11:02.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.864 "is_configured": false, 00:11:02.864 "data_offset": 0, 00:11:02.864 "data_size": 0 00:11:02.864 } 00:11:02.864 ] 00:11:02.864 }' 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.864 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.433 [2024-11-28 18:51:32.761536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.433 [2024-11-28 18:51:32.761647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 
-b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.433 [2024-11-28 18:51:32.773571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.433 [2024-11-28 18:51:32.775380] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.433 [2024-11-28 18:51:32.775421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.433 [2024-11-28 18:51:32.775441] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.433 [2024-11-28 18:51:32.775449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.433 [2024-11-28 18:51:32.775456] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.433 [2024-11-28 18:51:32.775463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.433 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.434 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.434 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.434 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.434 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.434 "name": "Existed_Raid", 00:11:03.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.434 "strip_size_kb": 0, 00:11:03.434 "state": "configuring", 00:11:03.434 "raid_level": "raid1", 00:11:03.434 "superblock": false, 00:11:03.434 "num_base_bdevs": 4, 00:11:03.434 "num_base_bdevs_discovered": 1, 00:11:03.434 "num_base_bdevs_operational": 4, 00:11:03.434 "base_bdevs_list": [ 00:11:03.434 { 00:11:03.434 "name": "BaseBdev1", 00:11:03.434 "uuid": "b1635eeb-6db9-4ac2-a382-b4d1972799de", 00:11:03.434 "is_configured": true, 00:11:03.434 "data_offset": 0, 00:11:03.434 "data_size": 65536 00:11:03.434 }, 00:11:03.434 { 
00:11:03.434 "name": "BaseBdev2", 00:11:03.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.434 "is_configured": false, 00:11:03.434 "data_offset": 0, 00:11:03.434 "data_size": 0 00:11:03.434 }, 00:11:03.434 { 00:11:03.434 "name": "BaseBdev3", 00:11:03.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.434 "is_configured": false, 00:11:03.434 "data_offset": 0, 00:11:03.434 "data_size": 0 00:11:03.434 }, 00:11:03.434 { 00:11:03.434 "name": "BaseBdev4", 00:11:03.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.434 "is_configured": false, 00:11:03.434 "data_offset": 0, 00:11:03.434 "data_size": 0 00:11:03.434 } 00:11:03.434 ] 00:11:03.434 }' 00:11:03.434 18:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.434 18:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.694 [2024-11-28 18:51:33.152537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.694 BaseBdev2 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.694 [ 00:11:03.694 { 00:11:03.694 "name": "BaseBdev2", 00:11:03.694 "aliases": [ 00:11:03.694 "637d0559-c9ae-4f61-8d59-e855394279a1" 00:11:03.694 ], 00:11:03.694 "product_name": "Malloc disk", 00:11:03.694 "block_size": 512, 00:11:03.694 "num_blocks": 65536, 00:11:03.694 "uuid": "637d0559-c9ae-4f61-8d59-e855394279a1", 00:11:03.694 "assigned_rate_limits": { 00:11:03.694 "rw_ios_per_sec": 0, 00:11:03.694 "rw_mbytes_per_sec": 0, 00:11:03.694 "r_mbytes_per_sec": 0, 00:11:03.694 "w_mbytes_per_sec": 0 00:11:03.694 }, 00:11:03.694 "claimed": true, 00:11:03.694 "claim_type": "exclusive_write", 00:11:03.694 "zoned": false, 00:11:03.694 "supported_io_types": { 00:11:03.694 "read": true, 00:11:03.694 "write": true, 00:11:03.694 "unmap": true, 00:11:03.694 "flush": true, 00:11:03.694 "reset": true, 00:11:03.694 "nvme_admin": false, 00:11:03.694 "nvme_io": false, 00:11:03.694 "nvme_io_md": false, 00:11:03.694 "write_zeroes": true, 00:11:03.694 "zcopy": true, 00:11:03.694 "get_zone_info": false, 00:11:03.694 "zone_management": false, 
00:11:03.694 "zone_append": false, 00:11:03.694 "compare": false, 00:11:03.694 "compare_and_write": false, 00:11:03.694 "abort": true, 00:11:03.694 "seek_hole": false, 00:11:03.694 "seek_data": false, 00:11:03.694 "copy": true, 00:11:03.694 "nvme_iov_md": false 00:11:03.694 }, 00:11:03.694 "memory_domains": [ 00:11:03.694 { 00:11:03.694 "dma_device_id": "system", 00:11:03.694 "dma_device_type": 1 00:11:03.694 }, 00:11:03.694 { 00:11:03.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.694 "dma_device_type": 2 00:11:03.694 } 00:11:03.694 ], 00:11:03.694 "driver_specific": {} 00:11:03.694 } 00:11:03.694 ] 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.694 18:51:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.694 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.695 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.695 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.695 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.695 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.695 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.695 "name": "Existed_Raid", 00:11:03.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.695 "strip_size_kb": 0, 00:11:03.695 "state": "configuring", 00:11:03.695 "raid_level": "raid1", 00:11:03.695 "superblock": false, 00:11:03.695 "num_base_bdevs": 4, 00:11:03.695 "num_base_bdevs_discovered": 2, 00:11:03.695 "num_base_bdevs_operational": 4, 00:11:03.695 "base_bdevs_list": [ 00:11:03.695 { 00:11:03.695 "name": "BaseBdev1", 00:11:03.695 "uuid": "b1635eeb-6db9-4ac2-a382-b4d1972799de", 00:11:03.695 "is_configured": true, 00:11:03.695 "data_offset": 0, 00:11:03.695 "data_size": 65536 00:11:03.695 }, 00:11:03.695 { 00:11:03.695 "name": "BaseBdev2", 00:11:03.695 "uuid": "637d0559-c9ae-4f61-8d59-e855394279a1", 00:11:03.695 "is_configured": true, 00:11:03.695 "data_offset": 0, 00:11:03.695 "data_size": 65536 00:11:03.695 }, 00:11:03.695 { 00:11:03.695 "name": "BaseBdev3", 00:11:03.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.695 "is_configured": false, 00:11:03.695 "data_offset": 0, 00:11:03.695 "data_size": 0 00:11:03.695 }, 00:11:03.695 { 00:11:03.695 "name": "BaseBdev4", 
00:11:03.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.695 "is_configured": false, 00:11:03.695 "data_offset": 0, 00:11:03.695 "data_size": 0 00:11:03.695 } 00:11:03.695 ] 00:11:03.695 }' 00:11:03.695 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.695 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.265 [2024-11-28 18:51:33.651585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.265 BaseBdev3 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.265 [ 00:11:04.265 { 00:11:04.265 "name": "BaseBdev3", 00:11:04.265 "aliases": [ 00:11:04.265 "43fb98f3-f791-4976-8879-b1dd927c3ae9" 00:11:04.265 ], 00:11:04.265 "product_name": "Malloc disk", 00:11:04.265 "block_size": 512, 00:11:04.265 "num_blocks": 65536, 00:11:04.265 "uuid": "43fb98f3-f791-4976-8879-b1dd927c3ae9", 00:11:04.265 "assigned_rate_limits": { 00:11:04.265 "rw_ios_per_sec": 0, 00:11:04.265 "rw_mbytes_per_sec": 0, 00:11:04.265 "r_mbytes_per_sec": 0, 00:11:04.265 "w_mbytes_per_sec": 0 00:11:04.265 }, 00:11:04.265 "claimed": true, 00:11:04.265 "claim_type": "exclusive_write", 00:11:04.265 "zoned": false, 00:11:04.265 "supported_io_types": { 00:11:04.265 "read": true, 00:11:04.265 "write": true, 00:11:04.265 "unmap": true, 00:11:04.265 "flush": true, 00:11:04.265 "reset": true, 00:11:04.265 "nvme_admin": false, 00:11:04.265 "nvme_io": false, 00:11:04.265 "nvme_io_md": false, 00:11:04.265 "write_zeroes": true, 00:11:04.265 "zcopy": true, 00:11:04.265 "get_zone_info": false, 00:11:04.265 "zone_management": false, 00:11:04.265 "zone_append": false, 00:11:04.265 "compare": false, 00:11:04.265 "compare_and_write": false, 00:11:04.265 "abort": true, 00:11:04.265 "seek_hole": false, 00:11:04.265 "seek_data": false, 00:11:04.265 "copy": true, 00:11:04.265 "nvme_iov_md": false 00:11:04.265 }, 00:11:04.265 "memory_domains": [ 00:11:04.265 { 00:11:04.265 "dma_device_id": "system", 00:11:04.265 "dma_device_type": 1 00:11:04.265 }, 00:11:04.265 { 00:11:04.265 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:04.265 "dma_device_type": 2 00:11:04.265 } 00:11:04.265 ], 00:11:04.265 "driver_specific": {} 00:11:04.265 } 00:11:04.265 ] 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.265 
18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.265 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.265 "name": "Existed_Raid", 00:11:04.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.265 "strip_size_kb": 0, 00:11:04.265 "state": "configuring", 00:11:04.265 "raid_level": "raid1", 00:11:04.265 "superblock": false, 00:11:04.265 "num_base_bdevs": 4, 00:11:04.265 "num_base_bdevs_discovered": 3, 00:11:04.266 "num_base_bdevs_operational": 4, 00:11:04.266 "base_bdevs_list": [ 00:11:04.266 { 00:11:04.266 "name": "BaseBdev1", 00:11:04.266 "uuid": "b1635eeb-6db9-4ac2-a382-b4d1972799de", 00:11:04.266 "is_configured": true, 00:11:04.266 "data_offset": 0, 00:11:04.266 "data_size": 65536 00:11:04.266 }, 00:11:04.266 { 00:11:04.266 "name": "BaseBdev2", 00:11:04.266 "uuid": "637d0559-c9ae-4f61-8d59-e855394279a1", 00:11:04.266 "is_configured": true, 00:11:04.266 "data_offset": 0, 00:11:04.266 "data_size": 65536 00:11:04.266 }, 00:11:04.266 { 00:11:04.266 "name": "BaseBdev3", 00:11:04.266 "uuid": "43fb98f3-f791-4976-8879-b1dd927c3ae9", 00:11:04.266 "is_configured": true, 00:11:04.266 "data_offset": 0, 00:11:04.266 "data_size": 65536 00:11:04.266 }, 00:11:04.266 { 00:11:04.266 "name": "BaseBdev4", 00:11:04.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.266 "is_configured": false, 00:11:04.266 "data_offset": 0, 00:11:04.266 "data_size": 0 00:11:04.266 } 00:11:04.266 ] 00:11:04.266 }' 00:11:04.266 18:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.266 18:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.526 18:51:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:04.526 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.526 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 [2024-11-28 18:51:34.134738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.787 [2024-11-28 18:51:34.134782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:04.787 [2024-11-28 18:51:34.134792] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:04.787 [2024-11-28 18:51:34.135070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:04.787 [2024-11-28 18:51:34.135231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:04.787 [2024-11-28 18:51:34.135241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:04.787 [2024-11-28 18:51:34.135481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.787 BaseBdev4 00:11:04.787 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:04.787 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:04.787 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.787 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.787 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.787 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.787 
18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.788 [ 00:11:04.788 { 00:11:04.788 "name": "BaseBdev4", 00:11:04.788 "aliases": [ 00:11:04.788 "5011de9b-2cb1-4d4c-a10b-2322d450a93c" 00:11:04.788 ], 00:11:04.788 "product_name": "Malloc disk", 00:11:04.788 "block_size": 512, 00:11:04.788 "num_blocks": 65536, 00:11:04.788 "uuid": "5011de9b-2cb1-4d4c-a10b-2322d450a93c", 00:11:04.788 "assigned_rate_limits": { 00:11:04.788 "rw_ios_per_sec": 0, 00:11:04.788 "rw_mbytes_per_sec": 0, 00:11:04.788 "r_mbytes_per_sec": 0, 00:11:04.788 "w_mbytes_per_sec": 0 00:11:04.788 }, 00:11:04.788 "claimed": true, 00:11:04.788 "claim_type": "exclusive_write", 00:11:04.788 "zoned": false, 00:11:04.788 "supported_io_types": { 00:11:04.788 "read": true, 00:11:04.788 "write": true, 00:11:04.788 "unmap": true, 00:11:04.788 "flush": true, 00:11:04.788 "reset": true, 00:11:04.788 "nvme_admin": false, 00:11:04.788 "nvme_io": false, 00:11:04.788 "nvme_io_md": false, 00:11:04.788 "write_zeroes": true, 00:11:04.788 "zcopy": true, 00:11:04.788 "get_zone_info": false, 00:11:04.788 "zone_management": false, 00:11:04.788 "zone_append": false, 00:11:04.788 "compare": false, 00:11:04.788 "compare_and_write": false, 00:11:04.788 "abort": true, 00:11:04.788 "seek_hole": false, 
00:11:04.788 "seek_data": false, 00:11:04.788 "copy": true, 00:11:04.788 "nvme_iov_md": false 00:11:04.788 }, 00:11:04.788 "memory_domains": [ 00:11:04.788 { 00:11:04.788 "dma_device_id": "system", 00:11:04.788 "dma_device_type": 1 00:11:04.788 }, 00:11:04.788 { 00:11:04.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.788 "dma_device_type": 2 00:11:04.788 } 00:11:04.788 ], 00:11:04.788 "driver_specific": {} 00:11:04.788 } 00:11:04.788 ] 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.788 "name": "Existed_Raid", 00:11:04.788 "uuid": "7c2a901a-6ea9-42f7-a73e-e556c6a4ced1", 00:11:04.788 "strip_size_kb": 0, 00:11:04.788 "state": "online", 00:11:04.788 "raid_level": "raid1", 00:11:04.788 "superblock": false, 00:11:04.788 "num_base_bdevs": 4, 00:11:04.788 "num_base_bdevs_discovered": 4, 00:11:04.788 "num_base_bdevs_operational": 4, 00:11:04.788 "base_bdevs_list": [ 00:11:04.788 { 00:11:04.788 "name": "BaseBdev1", 00:11:04.788 "uuid": "b1635eeb-6db9-4ac2-a382-b4d1972799de", 00:11:04.788 "is_configured": true, 00:11:04.788 "data_offset": 0, 00:11:04.788 "data_size": 65536 00:11:04.788 }, 00:11:04.788 { 00:11:04.788 "name": "BaseBdev2", 00:11:04.788 "uuid": "637d0559-c9ae-4f61-8d59-e855394279a1", 00:11:04.788 "is_configured": true, 00:11:04.788 "data_offset": 0, 00:11:04.788 "data_size": 65536 00:11:04.788 }, 00:11:04.788 { 00:11:04.788 "name": "BaseBdev3", 00:11:04.788 "uuid": "43fb98f3-f791-4976-8879-b1dd927c3ae9", 00:11:04.788 "is_configured": true, 00:11:04.788 "data_offset": 0, 00:11:04.788 "data_size": 65536 00:11:04.788 }, 00:11:04.788 { 00:11:04.788 "name": "BaseBdev4", 00:11:04.788 "uuid": "5011de9b-2cb1-4d4c-a10b-2322d450a93c", 00:11:04.788 "is_configured": true, 00:11:04.788 "data_offset": 0, 00:11:04.788 "data_size": 65536 00:11:04.788 } 00:11:04.788 ] 
00:11:04.788 }' 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.788 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.049 [2024-11-28 18:51:34.587196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.049 "name": "Existed_Raid", 00:11:05.049 "aliases": [ 00:11:05.049 "7c2a901a-6ea9-42f7-a73e-e556c6a4ced1" 00:11:05.049 ], 00:11:05.049 "product_name": "Raid Volume", 00:11:05.049 "block_size": 512, 00:11:05.049 "num_blocks": 65536, 00:11:05.049 "uuid": "7c2a901a-6ea9-42f7-a73e-e556c6a4ced1", 00:11:05.049 
"assigned_rate_limits": { 00:11:05.049 "rw_ios_per_sec": 0, 00:11:05.049 "rw_mbytes_per_sec": 0, 00:11:05.049 "r_mbytes_per_sec": 0, 00:11:05.049 "w_mbytes_per_sec": 0 00:11:05.049 }, 00:11:05.049 "claimed": false, 00:11:05.049 "zoned": false, 00:11:05.049 "supported_io_types": { 00:11:05.049 "read": true, 00:11:05.049 "write": true, 00:11:05.049 "unmap": false, 00:11:05.049 "flush": false, 00:11:05.049 "reset": true, 00:11:05.049 "nvme_admin": false, 00:11:05.049 "nvme_io": false, 00:11:05.049 "nvme_io_md": false, 00:11:05.049 "write_zeroes": true, 00:11:05.049 "zcopy": false, 00:11:05.049 "get_zone_info": false, 00:11:05.049 "zone_management": false, 00:11:05.049 "zone_append": false, 00:11:05.049 "compare": false, 00:11:05.049 "compare_and_write": false, 00:11:05.049 "abort": false, 00:11:05.049 "seek_hole": false, 00:11:05.049 "seek_data": false, 00:11:05.049 "copy": false, 00:11:05.049 "nvme_iov_md": false 00:11:05.049 }, 00:11:05.049 "memory_domains": [ 00:11:05.049 { 00:11:05.049 "dma_device_id": "system", 00:11:05.049 "dma_device_type": 1 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.049 "dma_device_type": 2 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "dma_device_id": "system", 00:11:05.049 "dma_device_type": 1 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.049 "dma_device_type": 2 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "dma_device_id": "system", 00:11:05.049 "dma_device_type": 1 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.049 "dma_device_type": 2 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "dma_device_id": "system", 00:11:05.049 "dma_device_type": 1 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.049 "dma_device_type": 2 00:11:05.049 } 00:11:05.049 ], 00:11:05.049 "driver_specific": { 00:11:05.049 "raid": { 00:11:05.049 "uuid": 
"7c2a901a-6ea9-42f7-a73e-e556c6a4ced1", 00:11:05.049 "strip_size_kb": 0, 00:11:05.049 "state": "online", 00:11:05.049 "raid_level": "raid1", 00:11:05.049 "superblock": false, 00:11:05.049 "num_base_bdevs": 4, 00:11:05.049 "num_base_bdevs_discovered": 4, 00:11:05.049 "num_base_bdevs_operational": 4, 00:11:05.049 "base_bdevs_list": [ 00:11:05.049 { 00:11:05.049 "name": "BaseBdev1", 00:11:05.049 "uuid": "b1635eeb-6db9-4ac2-a382-b4d1972799de", 00:11:05.049 "is_configured": true, 00:11:05.049 "data_offset": 0, 00:11:05.049 "data_size": 65536 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "name": "BaseBdev2", 00:11:05.049 "uuid": "637d0559-c9ae-4f61-8d59-e855394279a1", 00:11:05.049 "is_configured": true, 00:11:05.049 "data_offset": 0, 00:11:05.049 "data_size": 65536 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "name": "BaseBdev3", 00:11:05.049 "uuid": "43fb98f3-f791-4976-8879-b1dd927c3ae9", 00:11:05.049 "is_configured": true, 00:11:05.049 "data_offset": 0, 00:11:05.049 "data_size": 65536 00:11:05.049 }, 00:11:05.049 { 00:11:05.049 "name": "BaseBdev4", 00:11:05.049 "uuid": "5011de9b-2cb1-4d4c-a10b-2322d450a93c", 00:11:05.049 "is_configured": true, 00:11:05.049 "data_offset": 0, 00:11:05.049 "data_size": 65536 00:11:05.049 } 00:11:05.049 ] 00:11:05.049 } 00:11:05.049 } 00:11:05.049 }' 00:11:05.049 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:05.310 BaseBdev2 00:11:05.310 BaseBdev3 00:11:05.310 BaseBdev4' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.310 [2024-11-28 18:51:34.891005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.310 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.570 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.570 18:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.570 "name": "Existed_Raid", 00:11:05.570 "uuid": "7c2a901a-6ea9-42f7-a73e-e556c6a4ced1", 00:11:05.570 "strip_size_kb": 0, 00:11:05.570 "state": "online", 00:11:05.570 "raid_level": "raid1", 00:11:05.570 "superblock": false, 00:11:05.570 "num_base_bdevs": 4, 00:11:05.570 "num_base_bdevs_discovered": 3, 00:11:05.570 "num_base_bdevs_operational": 3, 00:11:05.570 "base_bdevs_list": [ 00:11:05.570 { 00:11:05.570 "name": null, 00:11:05.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.570 "is_configured": false, 00:11:05.570 "data_offset": 0, 00:11:05.570 "data_size": 65536 00:11:05.570 }, 00:11:05.570 { 00:11:05.570 "name": "BaseBdev2", 00:11:05.570 "uuid": "637d0559-c9ae-4f61-8d59-e855394279a1", 00:11:05.570 "is_configured": true, 00:11:05.570 "data_offset": 0, 00:11:05.570 "data_size": 65536 00:11:05.570 }, 00:11:05.570 { 00:11:05.570 "name": "BaseBdev3", 00:11:05.570 "uuid": "43fb98f3-f791-4976-8879-b1dd927c3ae9", 00:11:05.570 "is_configured": true, 00:11:05.570 "data_offset": 0, 00:11:05.570 "data_size": 65536 00:11:05.570 }, 00:11:05.570 { 00:11:05.570 "name": "BaseBdev4", 00:11:05.570 "uuid": "5011de9b-2cb1-4d4c-a10b-2322d450a93c", 00:11:05.570 "is_configured": true, 00:11:05.570 "data_offset": 0, 00:11:05.570 "data_size": 65536 00:11:05.570 } 00:11:05.570 ] 00:11:05.570 }' 00:11:05.570 18:51:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.570 18:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.829 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:05.829 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.829 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.829 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.829 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.830 [2024-11-28 18:51:35.386532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.830 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.089 [2024-11-28 18:51:35.449895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.089 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.089 [2024-11-28 18:51:35.520858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:06.089 [2024-11-28 18:51:35.520950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.090 [2024-11-28 18:51:35.532071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.090 [2024-11-28 18:51:35.532120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.090 [2024-11-28 18:51:35.532140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.090 BaseBdev2 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.090 [ 00:11:06.090 { 00:11:06.090 "name": "BaseBdev2", 00:11:06.090 "aliases": [ 00:11:06.090 "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5" 00:11:06.090 ], 00:11:06.090 "product_name": "Malloc disk", 00:11:06.090 "block_size": 512, 00:11:06.090 "num_blocks": 65536, 00:11:06.090 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:06.090 "assigned_rate_limits": { 00:11:06.090 "rw_ios_per_sec": 0, 00:11:06.090 "rw_mbytes_per_sec": 0, 00:11:06.090 "r_mbytes_per_sec": 0, 00:11:06.090 "w_mbytes_per_sec": 0 00:11:06.090 }, 00:11:06.090 "claimed": false, 00:11:06.090 "zoned": false, 00:11:06.090 "supported_io_types": { 00:11:06.090 "read": true, 00:11:06.090 "write": true, 00:11:06.090 "unmap": true, 00:11:06.090 "flush": true, 00:11:06.090 "reset": true, 00:11:06.090 "nvme_admin": false, 00:11:06.090 "nvme_io": false, 00:11:06.090 "nvme_io_md": false, 00:11:06.090 "write_zeroes": true, 00:11:06.090 "zcopy": true, 00:11:06.090 "get_zone_info": false, 00:11:06.090 "zone_management": false, 00:11:06.090 "zone_append": false, 00:11:06.090 "compare": false, 00:11:06.090 "compare_and_write": false, 00:11:06.090 "abort": true, 00:11:06.090 "seek_hole": false, 00:11:06.090 "seek_data": false, 00:11:06.090 "copy": true, 00:11:06.090 "nvme_iov_md": false 00:11:06.090 }, 00:11:06.090 "memory_domains": [ 00:11:06.090 { 00:11:06.090 "dma_device_id": "system", 00:11:06.090 "dma_device_type": 1 00:11:06.090 }, 
00:11:06.090 { 00:11:06.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.090 "dma_device_type": 2 00:11:06.090 } 00:11:06.090 ], 00:11:06.090 "driver_specific": {} 00:11:06.090 } 00:11:06.090 ] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.090 BaseBdev3 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.090 [ 00:11:06.090 { 00:11:06.090 "name": "BaseBdev3", 00:11:06.090 "aliases": [ 00:11:06.090 "f1af5fcd-01d8-4567-a6f8-32d592a81a82" 00:11:06.090 ], 00:11:06.090 "product_name": "Malloc disk", 00:11:06.090 "block_size": 512, 00:11:06.090 "num_blocks": 65536, 00:11:06.090 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:06.090 "assigned_rate_limits": { 00:11:06.090 "rw_ios_per_sec": 0, 00:11:06.090 "rw_mbytes_per_sec": 0, 00:11:06.090 "r_mbytes_per_sec": 0, 00:11:06.090 "w_mbytes_per_sec": 0 00:11:06.090 }, 00:11:06.090 "claimed": false, 00:11:06.090 "zoned": false, 00:11:06.090 "supported_io_types": { 00:11:06.090 "read": true, 00:11:06.090 "write": true, 00:11:06.090 "unmap": true, 00:11:06.090 "flush": true, 00:11:06.090 "reset": true, 00:11:06.090 "nvme_admin": false, 00:11:06.090 "nvme_io": false, 00:11:06.090 "nvme_io_md": false, 00:11:06.090 "write_zeroes": true, 00:11:06.090 "zcopy": true, 00:11:06.090 "get_zone_info": false, 00:11:06.090 "zone_management": false, 00:11:06.090 "zone_append": false, 00:11:06.090 "compare": false, 00:11:06.090 "compare_and_write": false, 00:11:06.090 "abort": true, 00:11:06.090 "seek_hole": false, 00:11:06.090 "seek_data": false, 00:11:06.090 "copy": true, 00:11:06.090 "nvme_iov_md": false 00:11:06.090 }, 00:11:06.090 "memory_domains": [ 00:11:06.090 { 00:11:06.090 "dma_device_id": "system", 00:11:06.090 "dma_device_type": 1 00:11:06.090 }, 00:11:06.090 { 
00:11:06.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.090 "dma_device_type": 2 00:11:06.090 } 00:11:06.090 ], 00:11:06.090 "driver_specific": {} 00:11:06.090 } 00:11:06.090 ] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.090 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.351 BaseBdev4 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.351 
18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.351 [ 00:11:06.351 { 00:11:06.351 "name": "BaseBdev4", 00:11:06.351 "aliases": [ 00:11:06.351 "a794c2d5-d881-42fd-9f55-2eda630fe818" 00:11:06.351 ], 00:11:06.351 "product_name": "Malloc disk", 00:11:06.351 "block_size": 512, 00:11:06.351 "num_blocks": 65536, 00:11:06.351 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:06.351 "assigned_rate_limits": { 00:11:06.351 "rw_ios_per_sec": 0, 00:11:06.351 "rw_mbytes_per_sec": 0, 00:11:06.351 "r_mbytes_per_sec": 0, 00:11:06.351 "w_mbytes_per_sec": 0 00:11:06.351 }, 00:11:06.351 "claimed": false, 00:11:06.351 "zoned": false, 00:11:06.351 "supported_io_types": { 00:11:06.351 "read": true, 00:11:06.351 "write": true, 00:11:06.351 "unmap": true, 00:11:06.351 "flush": true, 00:11:06.351 "reset": true, 00:11:06.351 "nvme_admin": false, 00:11:06.351 "nvme_io": false, 00:11:06.351 "nvme_io_md": false, 00:11:06.351 "write_zeroes": true, 00:11:06.351 "zcopy": true, 00:11:06.351 "get_zone_info": false, 00:11:06.351 "zone_management": false, 00:11:06.351 "zone_append": false, 00:11:06.351 "compare": false, 00:11:06.351 "compare_and_write": false, 00:11:06.351 "abort": true, 00:11:06.351 "seek_hole": false, 00:11:06.351 "seek_data": false, 00:11:06.351 "copy": true, 00:11:06.351 "nvme_iov_md": false 00:11:06.351 }, 00:11:06.351 "memory_domains": [ 00:11:06.351 { 00:11:06.351 "dma_device_id": "system", 00:11:06.351 "dma_device_type": 1 00:11:06.351 }, 00:11:06.351 { 00:11:06.351 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.351 "dma_device_type": 2 00:11:06.351 } 00:11:06.351 ], 00:11:06.351 "driver_specific": {} 00:11:06.351 } 00:11:06.351 ] 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.351 [2024-11-28 18:51:35.752340] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.351 [2024-11-28 18:51:35.752440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.351 [2024-11-28 18:51:35.752500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.351 [2024-11-28 18:51:35.754285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.351 [2024-11-28 18:51:35.754394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.351 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.351 "name": "Existed_Raid", 00:11:06.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.351 "strip_size_kb": 0, 00:11:06.351 "state": "configuring", 00:11:06.351 "raid_level": "raid1", 00:11:06.351 "superblock": false, 00:11:06.351 "num_base_bdevs": 4, 00:11:06.351 "num_base_bdevs_discovered": 3, 00:11:06.351 "num_base_bdevs_operational": 4, 00:11:06.351 "base_bdevs_list": [ 
00:11:06.351 { 00:11:06.351 "name": "BaseBdev1", 00:11:06.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.351 "is_configured": false, 00:11:06.351 "data_offset": 0, 00:11:06.351 "data_size": 0 00:11:06.351 }, 00:11:06.351 { 00:11:06.351 "name": "BaseBdev2", 00:11:06.351 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:06.351 "is_configured": true, 00:11:06.351 "data_offset": 0, 00:11:06.351 "data_size": 65536 00:11:06.352 }, 00:11:06.352 { 00:11:06.352 "name": "BaseBdev3", 00:11:06.352 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:06.352 "is_configured": true, 00:11:06.352 "data_offset": 0, 00:11:06.352 "data_size": 65536 00:11:06.352 }, 00:11:06.352 { 00:11:06.352 "name": "BaseBdev4", 00:11:06.352 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:06.352 "is_configured": true, 00:11:06.352 "data_offset": 0, 00:11:06.352 "data_size": 65536 00:11:06.352 } 00:11:06.352 ] 00:11:06.352 }' 00:11:06.352 18:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.352 18:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.612 [2024-11-28 18:51:36.116386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.612 18:51:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.612 "name": "Existed_Raid", 00:11:06.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.612 "strip_size_kb": 0, 00:11:06.612 "state": "configuring", 00:11:06.612 "raid_level": "raid1", 00:11:06.612 "superblock": false, 00:11:06.612 "num_base_bdevs": 4, 00:11:06.612 "num_base_bdevs_discovered": 2, 00:11:06.612 "num_base_bdevs_operational": 4, 00:11:06.612 "base_bdevs_list": [ 00:11:06.612 { 00:11:06.612 "name": "BaseBdev1", 
00:11:06.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.612 "is_configured": false, 00:11:06.612 "data_offset": 0, 00:11:06.612 "data_size": 0 00:11:06.612 }, 00:11:06.612 { 00:11:06.612 "name": null, 00:11:06.612 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:06.612 "is_configured": false, 00:11:06.612 "data_offset": 0, 00:11:06.612 "data_size": 65536 00:11:06.612 }, 00:11:06.612 { 00:11:06.612 "name": "BaseBdev3", 00:11:06.612 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:06.612 "is_configured": true, 00:11:06.612 "data_offset": 0, 00:11:06.612 "data_size": 65536 00:11:06.612 }, 00:11:06.612 { 00:11:06.612 "name": "BaseBdev4", 00:11:06.612 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:06.612 "is_configured": true, 00:11:06.612 "data_offset": 0, 00:11:06.612 "data_size": 65536 00:11:06.612 } 00:11:06.612 ] 00:11:06.612 }' 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.612 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.183 [2024-11-28 18:51:36.535350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.183 BaseBdev1 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.183 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.183 [ 00:11:07.183 { 00:11:07.184 "name": "BaseBdev1", 00:11:07.184 "aliases": [ 00:11:07.184 "0be7c7cf-7b5f-4a63-809e-fe8b136690b4" 00:11:07.184 ], 00:11:07.184 
"product_name": "Malloc disk", 00:11:07.184 "block_size": 512, 00:11:07.184 "num_blocks": 65536, 00:11:07.184 "uuid": "0be7c7cf-7b5f-4a63-809e-fe8b136690b4", 00:11:07.184 "assigned_rate_limits": { 00:11:07.184 "rw_ios_per_sec": 0, 00:11:07.184 "rw_mbytes_per_sec": 0, 00:11:07.184 "r_mbytes_per_sec": 0, 00:11:07.184 "w_mbytes_per_sec": 0 00:11:07.184 }, 00:11:07.184 "claimed": true, 00:11:07.184 "claim_type": "exclusive_write", 00:11:07.184 "zoned": false, 00:11:07.184 "supported_io_types": { 00:11:07.184 "read": true, 00:11:07.184 "write": true, 00:11:07.184 "unmap": true, 00:11:07.184 "flush": true, 00:11:07.184 "reset": true, 00:11:07.184 "nvme_admin": false, 00:11:07.184 "nvme_io": false, 00:11:07.184 "nvme_io_md": false, 00:11:07.184 "write_zeroes": true, 00:11:07.184 "zcopy": true, 00:11:07.184 "get_zone_info": false, 00:11:07.184 "zone_management": false, 00:11:07.184 "zone_append": false, 00:11:07.184 "compare": false, 00:11:07.184 "compare_and_write": false, 00:11:07.184 "abort": true, 00:11:07.184 "seek_hole": false, 00:11:07.184 "seek_data": false, 00:11:07.184 "copy": true, 00:11:07.184 "nvme_iov_md": false 00:11:07.184 }, 00:11:07.184 "memory_domains": [ 00:11:07.184 { 00:11:07.184 "dma_device_id": "system", 00:11:07.184 "dma_device_type": 1 00:11:07.184 }, 00:11:07.184 { 00:11:07.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.184 "dma_device_type": 2 00:11:07.184 } 00:11:07.184 ], 00:11:07.184 "driver_specific": {} 00:11:07.184 } 00:11:07.184 ] 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.184 18:51:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.184 "name": "Existed_Raid", 00:11:07.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.184 "strip_size_kb": 0, 00:11:07.184 "state": "configuring", 00:11:07.184 "raid_level": "raid1", 00:11:07.184 "superblock": false, 00:11:07.184 "num_base_bdevs": 4, 00:11:07.184 "num_base_bdevs_discovered": 3, 00:11:07.184 "num_base_bdevs_operational": 4, 00:11:07.184 "base_bdevs_list": [ 00:11:07.184 { 00:11:07.184 "name": "BaseBdev1", 
00:11:07.184 "uuid": "0be7c7cf-7b5f-4a63-809e-fe8b136690b4", 00:11:07.184 "is_configured": true, 00:11:07.184 "data_offset": 0, 00:11:07.184 "data_size": 65536 00:11:07.184 }, 00:11:07.184 { 00:11:07.184 "name": null, 00:11:07.184 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:07.184 "is_configured": false, 00:11:07.184 "data_offset": 0, 00:11:07.184 "data_size": 65536 00:11:07.184 }, 00:11:07.184 { 00:11:07.184 "name": "BaseBdev3", 00:11:07.184 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:07.184 "is_configured": true, 00:11:07.184 "data_offset": 0, 00:11:07.184 "data_size": 65536 00:11:07.184 }, 00:11:07.184 { 00:11:07.184 "name": "BaseBdev4", 00:11:07.184 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:07.184 "is_configured": true, 00:11:07.184 "data_offset": 0, 00:11:07.184 "data_size": 65536 00:11:07.184 } 00:11:07.184 ] 00:11:07.184 }' 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.184 18:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.444 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.444 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.444 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.444 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.444 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.704 [2024-11-28 18:51:37.055531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.704 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.704 "name": "Existed_Raid", 00:11:07.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.704 "strip_size_kb": 0, 00:11:07.704 "state": "configuring", 00:11:07.704 "raid_level": "raid1", 00:11:07.704 "superblock": false, 00:11:07.704 "num_base_bdevs": 4, 00:11:07.704 "num_base_bdevs_discovered": 2, 00:11:07.705 "num_base_bdevs_operational": 4, 00:11:07.705 "base_bdevs_list": [ 00:11:07.705 { 00:11:07.705 "name": "BaseBdev1", 00:11:07.705 "uuid": "0be7c7cf-7b5f-4a63-809e-fe8b136690b4", 00:11:07.705 "is_configured": true, 00:11:07.705 "data_offset": 0, 00:11:07.705 "data_size": 65536 00:11:07.705 }, 00:11:07.705 { 00:11:07.705 "name": null, 00:11:07.705 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:07.705 "is_configured": false, 00:11:07.705 "data_offset": 0, 00:11:07.705 "data_size": 65536 00:11:07.705 }, 00:11:07.705 { 00:11:07.705 "name": null, 00:11:07.705 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:07.705 "is_configured": false, 00:11:07.705 "data_offset": 0, 00:11:07.705 "data_size": 65536 00:11:07.705 }, 00:11:07.705 { 00:11:07.705 "name": "BaseBdev4", 00:11:07.705 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:07.705 "is_configured": true, 00:11:07.705 "data_offset": 0, 00:11:07.705 "data_size": 65536 00:11:07.705 } 00:11:07.705 ] 00:11:07.705 }' 00:11:07.705 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.705 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.965 [2024-11-28 18:51:37.491703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.965 "name": "Existed_Raid", 00:11:07.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.965 "strip_size_kb": 0, 00:11:07.965 "state": "configuring", 00:11:07.965 "raid_level": "raid1", 00:11:07.965 "superblock": false, 00:11:07.965 "num_base_bdevs": 4, 00:11:07.965 "num_base_bdevs_discovered": 3, 00:11:07.965 "num_base_bdevs_operational": 4, 00:11:07.965 "base_bdevs_list": [ 00:11:07.965 { 00:11:07.965 "name": "BaseBdev1", 00:11:07.965 "uuid": "0be7c7cf-7b5f-4a63-809e-fe8b136690b4", 00:11:07.965 "is_configured": true, 00:11:07.965 "data_offset": 0, 00:11:07.965 "data_size": 65536 00:11:07.965 }, 00:11:07.965 { 00:11:07.965 "name": null, 00:11:07.965 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:07.965 "is_configured": false, 00:11:07.965 "data_offset": 0, 00:11:07.965 "data_size": 65536 00:11:07.965 }, 00:11:07.965 { 00:11:07.965 "name": "BaseBdev3", 00:11:07.965 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:07.965 "is_configured": true, 00:11:07.965 "data_offset": 0, 00:11:07.965 "data_size": 65536 00:11:07.965 }, 00:11:07.965 { 00:11:07.965 "name": "BaseBdev4", 00:11:07.965 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:07.965 "is_configured": true, 00:11:07.965 
"data_offset": 0, 00:11:07.965 "data_size": 65536 00:11:07.965 } 00:11:07.965 ] 00:11:07.965 }' 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.965 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.535 [2024-11-28 18:51:37.939856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.535 18:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.535 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.535 "name": "Existed_Raid", 00:11:08.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.535 "strip_size_kb": 0, 00:11:08.535 "state": "configuring", 00:11:08.535 "raid_level": "raid1", 00:11:08.535 "superblock": false, 00:11:08.535 "num_base_bdevs": 4, 00:11:08.535 "num_base_bdevs_discovered": 2, 00:11:08.536 "num_base_bdevs_operational": 4, 00:11:08.536 "base_bdevs_list": [ 00:11:08.536 { 00:11:08.536 "name": null, 00:11:08.536 "uuid": "0be7c7cf-7b5f-4a63-809e-fe8b136690b4", 00:11:08.536 "is_configured": false, 00:11:08.536 "data_offset": 0, 00:11:08.536 "data_size": 65536 00:11:08.536 }, 00:11:08.536 { 
00:11:08.536 "name": null, 00:11:08.536 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:08.536 "is_configured": false, 00:11:08.536 "data_offset": 0, 00:11:08.536 "data_size": 65536 00:11:08.536 }, 00:11:08.536 { 00:11:08.536 "name": "BaseBdev3", 00:11:08.536 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:08.536 "is_configured": true, 00:11:08.536 "data_offset": 0, 00:11:08.536 "data_size": 65536 00:11:08.536 }, 00:11:08.536 { 00:11:08.536 "name": "BaseBdev4", 00:11:08.536 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:08.536 "is_configured": true, 00:11:08.536 "data_offset": 0, 00:11:08.536 "data_size": 65536 00:11:08.536 } 00:11:08.536 ] 00:11:08.536 }' 00:11:08.536 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.536 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.795 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.795 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.795 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.795 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.795 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.055 [2024-11-28 18:51:38.422543] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:09.055 "name": "Existed_Raid", 00:11:09.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.055 "strip_size_kb": 0, 00:11:09.055 "state": "configuring", 00:11:09.055 "raid_level": "raid1", 00:11:09.055 "superblock": false, 00:11:09.055 "num_base_bdevs": 4, 00:11:09.055 "num_base_bdevs_discovered": 3, 00:11:09.055 "num_base_bdevs_operational": 4, 00:11:09.055 "base_bdevs_list": [ 00:11:09.055 { 00:11:09.055 "name": null, 00:11:09.055 "uuid": "0be7c7cf-7b5f-4a63-809e-fe8b136690b4", 00:11:09.055 "is_configured": false, 00:11:09.055 "data_offset": 0, 00:11:09.055 "data_size": 65536 00:11:09.055 }, 00:11:09.055 { 00:11:09.055 "name": "BaseBdev2", 00:11:09.055 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:09.055 "is_configured": true, 00:11:09.055 "data_offset": 0, 00:11:09.055 "data_size": 65536 00:11:09.055 }, 00:11:09.055 { 00:11:09.055 "name": "BaseBdev3", 00:11:09.055 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:09.055 "is_configured": true, 00:11:09.055 "data_offset": 0, 00:11:09.055 "data_size": 65536 00:11:09.055 }, 00:11:09.055 { 00:11:09.055 "name": "BaseBdev4", 00:11:09.055 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:09.055 "is_configured": true, 00:11:09.055 "data_offset": 0, 00:11:09.055 "data_size": 65536 00:11:09.055 } 00:11:09.055 ] 00:11:09.055 }' 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.055 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.314 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.574 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0be7c7cf-7b5f-4a63-809e-fe8b136690b4 00:11:09.574 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.574 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.574 [2024-11-28 18:51:38.941586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:09.574 [2024-11-28 18:51:38.941704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:09.574 [2024-11-28 18:51:38.941729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:09.574 [2024-11-28 18:51:38.941992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:09.574 [2024-11-28 18:51:38.942157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:09.574 [2024-11-28 18:51:38.942201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:09.575 [2024-11-28 
18:51:38.942401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.575 NewBaseBdev 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 [ 00:11:09.575 { 00:11:09.575 "name": "NewBaseBdev", 00:11:09.575 "aliases": [ 00:11:09.575 "0be7c7cf-7b5f-4a63-809e-fe8b136690b4" 00:11:09.575 ], 00:11:09.575 "product_name": "Malloc disk", 00:11:09.575 "block_size": 512, 00:11:09.575 "num_blocks": 65536, 00:11:09.575 "uuid": 
"0be7c7cf-7b5f-4a63-809e-fe8b136690b4", 00:11:09.575 "assigned_rate_limits": { 00:11:09.575 "rw_ios_per_sec": 0, 00:11:09.575 "rw_mbytes_per_sec": 0, 00:11:09.575 "r_mbytes_per_sec": 0, 00:11:09.575 "w_mbytes_per_sec": 0 00:11:09.575 }, 00:11:09.575 "claimed": true, 00:11:09.575 "claim_type": "exclusive_write", 00:11:09.575 "zoned": false, 00:11:09.575 "supported_io_types": { 00:11:09.575 "read": true, 00:11:09.575 "write": true, 00:11:09.575 "unmap": true, 00:11:09.575 "flush": true, 00:11:09.575 "reset": true, 00:11:09.575 "nvme_admin": false, 00:11:09.575 "nvme_io": false, 00:11:09.575 "nvme_io_md": false, 00:11:09.575 "write_zeroes": true, 00:11:09.575 "zcopy": true, 00:11:09.575 "get_zone_info": false, 00:11:09.575 "zone_management": false, 00:11:09.575 "zone_append": false, 00:11:09.575 "compare": false, 00:11:09.575 "compare_and_write": false, 00:11:09.575 "abort": true, 00:11:09.575 "seek_hole": false, 00:11:09.575 "seek_data": false, 00:11:09.575 "copy": true, 00:11:09.575 "nvme_iov_md": false 00:11:09.575 }, 00:11:09.575 "memory_domains": [ 00:11:09.575 { 00:11:09.575 "dma_device_id": "system", 00:11:09.575 "dma_device_type": 1 00:11:09.575 }, 00:11:09.575 { 00:11:09.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.575 "dma_device_type": 2 00:11:09.575 } 00:11:09.575 ], 00:11:09.575 "driver_specific": {} 00:11:09.575 } 00:11:09.575 ] 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.575 18:51:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.575 18:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.575 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.575 "name": "Existed_Raid", 00:11:09.575 "uuid": "f8d4c41b-de4a-43c2-be62-8400d0576e39", 00:11:09.575 "strip_size_kb": 0, 00:11:09.575 "state": "online", 00:11:09.575 "raid_level": "raid1", 00:11:09.575 "superblock": false, 00:11:09.575 "num_base_bdevs": 4, 00:11:09.575 "num_base_bdevs_discovered": 4, 00:11:09.575 "num_base_bdevs_operational": 4, 00:11:09.575 "base_bdevs_list": [ 00:11:09.575 { 00:11:09.575 "name": "NewBaseBdev", 00:11:09.575 "uuid": "0be7c7cf-7b5f-4a63-809e-fe8b136690b4", 00:11:09.575 "is_configured": true, 00:11:09.575 "data_offset": 0, 
00:11:09.575 "data_size": 65536 00:11:09.575 }, 00:11:09.575 { 00:11:09.575 "name": "BaseBdev2", 00:11:09.575 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:09.575 "is_configured": true, 00:11:09.575 "data_offset": 0, 00:11:09.575 "data_size": 65536 00:11:09.575 }, 00:11:09.575 { 00:11:09.575 "name": "BaseBdev3", 00:11:09.575 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:09.575 "is_configured": true, 00:11:09.575 "data_offset": 0, 00:11:09.575 "data_size": 65536 00:11:09.575 }, 00:11:09.575 { 00:11:09.575 "name": "BaseBdev4", 00:11:09.575 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:09.575 "is_configured": true, 00:11:09.575 "data_offset": 0, 00:11:09.575 "data_size": 65536 00:11:09.575 } 00:11:09.575 ] 00:11:09.575 }' 00:11:09.575 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.575 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.834 18:51:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.834 [2024-11-28 18:51:39.386022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.834 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.834 "name": "Existed_Raid", 00:11:09.834 "aliases": [ 00:11:09.834 "f8d4c41b-de4a-43c2-be62-8400d0576e39" 00:11:09.834 ], 00:11:09.834 "product_name": "Raid Volume", 00:11:09.834 "block_size": 512, 00:11:09.834 "num_blocks": 65536, 00:11:09.835 "uuid": "f8d4c41b-de4a-43c2-be62-8400d0576e39", 00:11:09.835 "assigned_rate_limits": { 00:11:09.835 "rw_ios_per_sec": 0, 00:11:09.835 "rw_mbytes_per_sec": 0, 00:11:09.835 "r_mbytes_per_sec": 0, 00:11:09.835 "w_mbytes_per_sec": 0 00:11:09.835 }, 00:11:09.835 "claimed": false, 00:11:09.835 "zoned": false, 00:11:09.835 "supported_io_types": { 00:11:09.835 "read": true, 00:11:09.835 "write": true, 00:11:09.835 "unmap": false, 00:11:09.835 "flush": false, 00:11:09.835 "reset": true, 00:11:09.835 "nvme_admin": false, 00:11:09.835 "nvme_io": false, 00:11:09.835 "nvme_io_md": false, 00:11:09.835 "write_zeroes": true, 00:11:09.835 "zcopy": false, 00:11:09.835 "get_zone_info": false, 00:11:09.835 "zone_management": false, 00:11:09.835 "zone_append": false, 00:11:09.835 "compare": false, 00:11:09.835 "compare_and_write": false, 00:11:09.835 "abort": false, 00:11:09.835 "seek_hole": false, 00:11:09.835 "seek_data": false, 00:11:09.835 "copy": false, 00:11:09.835 "nvme_iov_md": false 00:11:09.835 }, 00:11:09.835 "memory_domains": [ 00:11:09.835 { 00:11:09.835 "dma_device_id": "system", 00:11:09.835 "dma_device_type": 1 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.835 "dma_device_type": 2 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "dma_device_id": "system", 00:11:09.835 
"dma_device_type": 1 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.835 "dma_device_type": 2 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "dma_device_id": "system", 00:11:09.835 "dma_device_type": 1 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.835 "dma_device_type": 2 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "dma_device_id": "system", 00:11:09.835 "dma_device_type": 1 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.835 "dma_device_type": 2 00:11:09.835 } 00:11:09.835 ], 00:11:09.835 "driver_specific": { 00:11:09.835 "raid": { 00:11:09.835 "uuid": "f8d4c41b-de4a-43c2-be62-8400d0576e39", 00:11:09.835 "strip_size_kb": 0, 00:11:09.835 "state": "online", 00:11:09.835 "raid_level": "raid1", 00:11:09.835 "superblock": false, 00:11:09.835 "num_base_bdevs": 4, 00:11:09.835 "num_base_bdevs_discovered": 4, 00:11:09.835 "num_base_bdevs_operational": 4, 00:11:09.835 "base_bdevs_list": [ 00:11:09.835 { 00:11:09.835 "name": "NewBaseBdev", 00:11:09.835 "uuid": "0be7c7cf-7b5f-4a63-809e-fe8b136690b4", 00:11:09.835 "is_configured": true, 00:11:09.835 "data_offset": 0, 00:11:09.835 "data_size": 65536 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "name": "BaseBdev2", 00:11:09.835 "uuid": "e8d4b2fc-429e-45c8-b0e9-026f9fc591f5", 00:11:09.835 "is_configured": true, 00:11:09.835 "data_offset": 0, 00:11:09.835 "data_size": 65536 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "name": "BaseBdev3", 00:11:09.835 "uuid": "f1af5fcd-01d8-4567-a6f8-32d592a81a82", 00:11:09.835 "is_configured": true, 00:11:09.835 "data_offset": 0, 00:11:09.835 "data_size": 65536 00:11:09.835 }, 00:11:09.835 { 00:11:09.835 "name": "BaseBdev4", 00:11:09.835 "uuid": "a794c2d5-d881-42fd-9f55-2eda630fe818", 00:11:09.835 "is_configured": true, 00:11:09.835 "data_offset": 0, 00:11:09.835 "data_size": 65536 00:11:09.835 } 00:11:09.835 ] 00:11:09.835 } 00:11:09.835 } 00:11:09.835 }' 
00:11:09.835 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:10.094 BaseBdev2 00:11:10.094 BaseBdev3 00:11:10.094 BaseBdev4' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.094 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.353 [2024-11-28 18:51:39.705828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.353 [2024-11-28 18:51:39.705856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.353 [2024-11-28 18:51:39.705922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.353 [2024-11-28 18:51:39.706171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.353 [2024-11-28 18:51:39.706185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 85475 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 85475 ']' 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 85475 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:10.353 18:51:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85475 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.353 killing process with pid 85475 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85475' 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 85475 00:11:10.353 [2024-11-28 18:51:39.750039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.353 18:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 85475 00:11:10.353 [2024-11-28 18:51:39.790160] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:10.613 00:11:10.613 real 0m9.098s 00:11:10.613 user 0m15.540s 00:11:10.613 sys 0m1.853s 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.613 ************************************ 00:11:10.613 END TEST raid_state_function_test 00:11:10.613 ************************************ 00:11:10.613 18:51:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:10.613 18:51:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:10.613 18:51:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.613 18:51:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:11:10.613 ************************************ 00:11:10.613 START TEST raid_state_function_test_sb 00:11:10.613 ************************************ 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev4 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:10.613 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=86125 00:11:10.614 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:10.614 Process raid pid: 86125 00:11:10.614 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86125' 00:11:10.614 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 86125 00:11:10.614 18:51:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86125 ']' 00:11:10.614 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.614 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.614 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.614 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.614 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.614 [2024-11-28 18:51:40.173197] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:10.614 [2024-11-28 18:51:40.173333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.873 [2024-11-28 18:51:40.308653] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:10.873 [2024-11-28 18:51:40.343758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.873 [2024-11-28 18:51:40.368499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.873 [2024-11-28 18:51:40.410642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.873 [2024-11-28 18:51:40.410698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.444 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.444 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:11.444 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.444 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.444 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.444 [2024-11-28 18:51:40.998102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.444 [2024-11-28 18:51:40.998155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.444 [2024-11-28 18:51:40.998166] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.444 [2024-11-28 18:51:40.998174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.444 [2024-11-28 18:51:40.998183] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.444 [2024-11-28 18:51:40.998190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.444 [2024-11-28 18:51:40.998199] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.444 
[2024-11-28 18:51:40.998206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.444 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:11.715 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.715 "name": "Existed_Raid", 00:11:11.715 "uuid": "c1e5eb15-3de4-481e-9a7b-60cca0bdaf6b", 00:11:11.715 "strip_size_kb": 0, 00:11:11.715 "state": "configuring", 00:11:11.715 "raid_level": "raid1", 00:11:11.715 "superblock": true, 00:11:11.715 "num_base_bdevs": 4, 00:11:11.715 "num_base_bdevs_discovered": 0, 00:11:11.715 "num_base_bdevs_operational": 4, 00:11:11.715 "base_bdevs_list": [ 00:11:11.715 { 00:11:11.715 "name": "BaseBdev1", 00:11:11.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.715 "is_configured": false, 00:11:11.715 "data_offset": 0, 00:11:11.715 "data_size": 0 00:11:11.715 }, 00:11:11.715 { 00:11:11.715 "name": "BaseBdev2", 00:11:11.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.715 "is_configured": false, 00:11:11.715 "data_offset": 0, 00:11:11.715 "data_size": 0 00:11:11.715 }, 00:11:11.715 { 00:11:11.715 "name": "BaseBdev3", 00:11:11.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.715 "is_configured": false, 00:11:11.715 "data_offset": 0, 00:11:11.715 "data_size": 0 00:11:11.715 }, 00:11:11.715 { 00:11:11.715 "name": "BaseBdev4", 00:11:11.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.715 "is_configured": false, 00:11:11.715 "data_offset": 0, 00:11:11.715 "data_size": 0 00:11:11.715 } 00:11:11.715 ] 00:11:11.715 }' 00:11:11.715 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.715 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.992 
[2024-11-28 18:51:41.466110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.992 [2024-11-28 18:51:41.466149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.992 [2024-11-28 18:51:41.474146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.992 [2024-11-28 18:51:41.474182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.992 [2024-11-28 18:51:41.474208] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.992 [2024-11-28 18:51:41.474216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.992 [2024-11-28 18:51:41.474223] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.992 [2024-11-28 18:51:41.474230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.992 [2024-11-28 18:51:41.474238] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.992 [2024-11-28 18:51:41.474244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.992 18:51:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.992 [2024-11-28 18:51:41.491063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.992 BaseBdev1 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.992 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.993 [ 00:11:11.993 { 00:11:11.993 "name": "BaseBdev1", 00:11:11.993 "aliases": [ 00:11:11.993 "d249006d-741e-48bc-89e3-ec9536bb3776" 00:11:11.993 ], 00:11:11.993 "product_name": "Malloc disk", 00:11:11.993 "block_size": 512, 00:11:11.993 "num_blocks": 65536, 00:11:11.993 "uuid": "d249006d-741e-48bc-89e3-ec9536bb3776", 00:11:11.993 "assigned_rate_limits": { 00:11:11.993 "rw_ios_per_sec": 0, 00:11:11.993 "rw_mbytes_per_sec": 0, 00:11:11.993 "r_mbytes_per_sec": 0, 00:11:11.993 "w_mbytes_per_sec": 0 00:11:11.993 }, 00:11:11.993 "claimed": true, 00:11:11.993 "claim_type": "exclusive_write", 00:11:11.993 "zoned": false, 00:11:11.993 "supported_io_types": { 00:11:11.993 "read": true, 00:11:11.993 "write": true, 00:11:11.993 "unmap": true, 00:11:11.993 "flush": true, 00:11:11.993 "reset": true, 00:11:11.993 "nvme_admin": false, 00:11:11.993 "nvme_io": false, 00:11:11.993 "nvme_io_md": false, 00:11:11.993 "write_zeroes": true, 00:11:11.993 "zcopy": true, 00:11:11.993 "get_zone_info": false, 00:11:11.993 "zone_management": false, 00:11:11.993 "zone_append": false, 00:11:11.993 "compare": false, 00:11:11.993 "compare_and_write": false, 00:11:11.993 "abort": true, 00:11:11.993 "seek_hole": false, 00:11:11.993 "seek_data": false, 00:11:11.993 "copy": true, 00:11:11.993 "nvme_iov_md": false 00:11:11.993 }, 00:11:11.993 "memory_domains": [ 00:11:11.993 { 00:11:11.993 "dma_device_id": "system", 00:11:11.993 "dma_device_type": 1 00:11:11.993 }, 00:11:11.993 { 00:11:11.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.993 "dma_device_type": 2 00:11:11.993 } 00:11:11.993 ], 00:11:11.993 "driver_specific": {} 00:11:11.993 } 00:11:11.993 ] 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:11.993 
18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.993 "name": "Existed_Raid", 00:11:11.993 "uuid": "dbb5f661-b1d9-4d80-ab9d-7cfa8f2ea3a6", 00:11:11.993 "strip_size_kb": 0, 
00:11:11.993 "state": "configuring", 00:11:11.993 "raid_level": "raid1", 00:11:11.993 "superblock": true, 00:11:11.993 "num_base_bdevs": 4, 00:11:11.993 "num_base_bdevs_discovered": 1, 00:11:11.993 "num_base_bdevs_operational": 4, 00:11:11.993 "base_bdevs_list": [ 00:11:11.993 { 00:11:11.993 "name": "BaseBdev1", 00:11:11.993 "uuid": "d249006d-741e-48bc-89e3-ec9536bb3776", 00:11:11.993 "is_configured": true, 00:11:11.993 "data_offset": 2048, 00:11:11.993 "data_size": 63488 00:11:11.993 }, 00:11:11.993 { 00:11:11.993 "name": "BaseBdev2", 00:11:11.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.993 "is_configured": false, 00:11:11.993 "data_offset": 0, 00:11:11.993 "data_size": 0 00:11:11.993 }, 00:11:11.993 { 00:11:11.993 "name": "BaseBdev3", 00:11:11.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.993 "is_configured": false, 00:11:11.993 "data_offset": 0, 00:11:11.993 "data_size": 0 00:11:11.993 }, 00:11:11.993 { 00:11:11.993 "name": "BaseBdev4", 00:11:11.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.993 "is_configured": false, 00:11:11.993 "data_offset": 0, 00:11:11.993 "data_size": 0 00:11:11.993 } 00:11:11.993 ] 00:11:11.993 }' 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.993 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.561 [2024-11-28 18:51:41.951219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.561 [2024-11-28 18:51:41.951274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.561 [2024-11-28 18:51:41.959263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.561 [2024-11-28 18:51:41.961126] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.561 [2024-11-28 18:51:41.961163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.561 [2024-11-28 18:51:41.961174] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:12.561 [2024-11-28 18:51:41.961181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.561 [2024-11-28 18:51:41.961188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:12.561 [2024-11-28 18:51:41.961195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.561 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.561 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.561 "name": "Existed_Raid", 00:11:12.561 "uuid": "3eb347a8-6346-449d-bf6f-fb31403c7efb", 00:11:12.561 "strip_size_kb": 0, 00:11:12.561 "state": "configuring", 00:11:12.561 "raid_level": "raid1", 00:11:12.561 "superblock": true, 00:11:12.561 "num_base_bdevs": 4, 00:11:12.561 "num_base_bdevs_discovered": 1, 00:11:12.561 
"num_base_bdevs_operational": 4, 00:11:12.561 "base_bdevs_list": [ 00:11:12.561 { 00:11:12.561 "name": "BaseBdev1", 00:11:12.561 "uuid": "d249006d-741e-48bc-89e3-ec9536bb3776", 00:11:12.561 "is_configured": true, 00:11:12.561 "data_offset": 2048, 00:11:12.561 "data_size": 63488 00:11:12.561 }, 00:11:12.561 { 00:11:12.561 "name": "BaseBdev2", 00:11:12.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.561 "is_configured": false, 00:11:12.561 "data_offset": 0, 00:11:12.561 "data_size": 0 00:11:12.561 }, 00:11:12.561 { 00:11:12.561 "name": "BaseBdev3", 00:11:12.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.561 "is_configured": false, 00:11:12.561 "data_offset": 0, 00:11:12.561 "data_size": 0 00:11:12.561 }, 00:11:12.561 { 00:11:12.561 "name": "BaseBdev4", 00:11:12.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.561 "is_configured": false, 00:11:12.561 "data_offset": 0, 00:11:12.561 "data_size": 0 00:11:12.561 } 00:11:12.561 ] 00:11:12.561 }' 00:11:12.561 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.561 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.821 [2024-11-28 18:51:42.382292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.821 BaseBdev2 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.821 [ 00:11:12.821 { 00:11:12.821 "name": "BaseBdev2", 00:11:12.821 "aliases": [ 00:11:12.821 "4760d007-322d-4950-ba4a-aa88068a475c" 00:11:12.821 ], 00:11:12.821 "product_name": "Malloc disk", 00:11:12.821 "block_size": 512, 00:11:12.821 "num_blocks": 65536, 00:11:12.821 "uuid": "4760d007-322d-4950-ba4a-aa88068a475c", 00:11:12.821 "assigned_rate_limits": { 00:11:12.821 "rw_ios_per_sec": 0, 00:11:12.821 "rw_mbytes_per_sec": 0, 00:11:12.821 "r_mbytes_per_sec": 0, 00:11:12.821 "w_mbytes_per_sec": 0 00:11:12.821 }, 00:11:12.821 "claimed": true, 00:11:12.821 "claim_type": "exclusive_write", 00:11:12.821 "zoned": false, 00:11:12.821 "supported_io_types": { 
00:11:12.821 "read": true, 00:11:12.821 "write": true, 00:11:12.821 "unmap": true, 00:11:12.821 "flush": true, 00:11:12.821 "reset": true, 00:11:12.821 "nvme_admin": false, 00:11:12.821 "nvme_io": false, 00:11:12.821 "nvme_io_md": false, 00:11:12.821 "write_zeroes": true, 00:11:12.821 "zcopy": true, 00:11:12.821 "get_zone_info": false, 00:11:12.821 "zone_management": false, 00:11:12.821 "zone_append": false, 00:11:12.821 "compare": false, 00:11:12.821 "compare_and_write": false, 00:11:12.821 "abort": true, 00:11:12.821 "seek_hole": false, 00:11:12.821 "seek_data": false, 00:11:12.821 "copy": true, 00:11:12.821 "nvme_iov_md": false 00:11:12.821 }, 00:11:12.821 "memory_domains": [ 00:11:12.821 { 00:11:12.821 "dma_device_id": "system", 00:11:12.821 "dma_device_type": 1 00:11:12.821 }, 00:11:12.821 { 00:11:12.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.821 "dma_device_type": 2 00:11:12.821 } 00:11:12.821 ], 00:11:12.821 "driver_specific": {} 00:11:12.821 } 00:11:12.821 ] 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.821 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.822 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.822 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.822 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.822 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.822 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.822 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.822 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.822 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.080 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.080 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.080 "name": "Existed_Raid", 00:11:13.080 "uuid": "3eb347a8-6346-449d-bf6f-fb31403c7efb", 00:11:13.080 "strip_size_kb": 0, 00:11:13.080 "state": "configuring", 00:11:13.080 "raid_level": "raid1", 00:11:13.080 "superblock": true, 00:11:13.080 "num_base_bdevs": 4, 00:11:13.080 "num_base_bdevs_discovered": 2, 00:11:13.080 "num_base_bdevs_operational": 4, 00:11:13.080 "base_bdevs_list": [ 00:11:13.080 { 00:11:13.080 "name": "BaseBdev1", 00:11:13.080 "uuid": "d249006d-741e-48bc-89e3-ec9536bb3776", 00:11:13.080 "is_configured": true, 00:11:13.080 "data_offset": 2048, 00:11:13.080 "data_size": 63488 00:11:13.080 }, 00:11:13.080 { 00:11:13.080 "name": "BaseBdev2", 00:11:13.080 
"uuid": "4760d007-322d-4950-ba4a-aa88068a475c", 00:11:13.080 "is_configured": true, 00:11:13.080 "data_offset": 2048, 00:11:13.080 "data_size": 63488 00:11:13.080 }, 00:11:13.080 { 00:11:13.080 "name": "BaseBdev3", 00:11:13.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.080 "is_configured": false, 00:11:13.080 "data_offset": 0, 00:11:13.080 "data_size": 0 00:11:13.080 }, 00:11:13.080 { 00:11:13.080 "name": "BaseBdev4", 00:11:13.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.080 "is_configured": false, 00:11:13.080 "data_offset": 0, 00:11:13.080 "data_size": 0 00:11:13.080 } 00:11:13.080 ] 00:11:13.080 }' 00:11:13.080 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.080 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.340 [2024-11-28 18:51:42.844875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.340 BaseBdev3 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.340 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.340 [ 00:11:13.340 { 00:11:13.340 "name": "BaseBdev3", 00:11:13.340 "aliases": [ 00:11:13.340 "e8253631-819b-40a1-a467-7644f774ce27" 00:11:13.340 ], 00:11:13.340 "product_name": "Malloc disk", 00:11:13.340 "block_size": 512, 00:11:13.340 "num_blocks": 65536, 00:11:13.340 "uuid": "e8253631-819b-40a1-a467-7644f774ce27", 00:11:13.340 "assigned_rate_limits": { 00:11:13.340 "rw_ios_per_sec": 0, 00:11:13.340 "rw_mbytes_per_sec": 0, 00:11:13.340 "r_mbytes_per_sec": 0, 00:11:13.340 "w_mbytes_per_sec": 0 00:11:13.340 }, 00:11:13.340 "claimed": true, 00:11:13.340 "claim_type": "exclusive_write", 00:11:13.340 "zoned": false, 00:11:13.340 "supported_io_types": { 00:11:13.340 "read": true, 00:11:13.340 "write": true, 00:11:13.340 "unmap": true, 00:11:13.340 "flush": true, 00:11:13.340 "reset": true, 00:11:13.340 "nvme_admin": false, 00:11:13.340 "nvme_io": false, 00:11:13.340 "nvme_io_md": false, 00:11:13.340 "write_zeroes": true, 00:11:13.340 "zcopy": true, 00:11:13.340 "get_zone_info": false, 00:11:13.340 
"zone_management": false, 00:11:13.340 "zone_append": false, 00:11:13.340 "compare": false, 00:11:13.340 "compare_and_write": false, 00:11:13.340 "abort": true, 00:11:13.340 "seek_hole": false, 00:11:13.340 "seek_data": false, 00:11:13.340 "copy": true, 00:11:13.340 "nvme_iov_md": false 00:11:13.340 }, 00:11:13.341 "memory_domains": [ 00:11:13.341 { 00:11:13.341 "dma_device_id": "system", 00:11:13.341 "dma_device_type": 1 00:11:13.341 }, 00:11:13.341 { 00:11:13.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.341 "dma_device_type": 2 00:11:13.341 } 00:11:13.341 ], 00:11:13.341 "driver_specific": {} 00:11:13.341 } 00:11:13.341 ] 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.341 "name": "Existed_Raid", 00:11:13.341 "uuid": "3eb347a8-6346-449d-bf6f-fb31403c7efb", 00:11:13.341 "strip_size_kb": 0, 00:11:13.341 "state": "configuring", 00:11:13.341 "raid_level": "raid1", 00:11:13.341 "superblock": true, 00:11:13.341 "num_base_bdevs": 4, 00:11:13.341 "num_base_bdevs_discovered": 3, 00:11:13.341 "num_base_bdevs_operational": 4, 00:11:13.341 "base_bdevs_list": [ 00:11:13.341 { 00:11:13.341 "name": "BaseBdev1", 00:11:13.341 "uuid": "d249006d-741e-48bc-89e3-ec9536bb3776", 00:11:13.341 "is_configured": true, 00:11:13.341 "data_offset": 2048, 00:11:13.341 "data_size": 63488 00:11:13.341 }, 00:11:13.341 { 00:11:13.341 "name": "BaseBdev2", 00:11:13.341 "uuid": "4760d007-322d-4950-ba4a-aa88068a475c", 00:11:13.341 "is_configured": true, 00:11:13.341 "data_offset": 2048, 00:11:13.341 "data_size": 63488 00:11:13.341 }, 00:11:13.341 { 00:11:13.341 "name": "BaseBdev3", 00:11:13.341 "uuid": "e8253631-819b-40a1-a467-7644f774ce27", 00:11:13.341 "is_configured": true, 00:11:13.341 "data_offset": 2048, 
00:11:13.341 "data_size": 63488 00:11:13.341 }, 00:11:13.341 { 00:11:13.341 "name": "BaseBdev4", 00:11:13.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.341 "is_configured": false, 00:11:13.341 "data_offset": 0, 00:11:13.341 "data_size": 0 00:11:13.341 } 00:11:13.341 ] 00:11:13.341 }' 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.341 18:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.911 [2024-11-28 18:51:43.315979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.911 [2024-11-28 18:51:43.316191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:13.911 [2024-11-28 18:51:43.316214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.911 BaseBdev4 00:11:13.911 [2024-11-28 18:51:43.316530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:13.911 [2024-11-28 18:51:43.316696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:13.911 [2024-11-28 18:51:43.316714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:13.911 [2024-11-28 18:51:43.316846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev4 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.911 [ 00:11:13.911 { 00:11:13.911 "name": "BaseBdev4", 00:11:13.911 "aliases": [ 00:11:13.911 "e5222cbe-c6c5-49d8-86ac-dcf92e6d6595" 00:11:13.911 ], 00:11:13.911 "product_name": "Malloc disk", 00:11:13.911 "block_size": 512, 00:11:13.911 "num_blocks": 65536, 00:11:13.911 "uuid": "e5222cbe-c6c5-49d8-86ac-dcf92e6d6595", 00:11:13.911 "assigned_rate_limits": { 00:11:13.911 "rw_ios_per_sec": 0, 00:11:13.911 "rw_mbytes_per_sec": 0, 00:11:13.911 "r_mbytes_per_sec": 0, 00:11:13.911 "w_mbytes_per_sec": 0 00:11:13.911 }, 00:11:13.911 "claimed": true, 00:11:13.911 "claim_type": 
"exclusive_write", 00:11:13.911 "zoned": false, 00:11:13.911 "supported_io_types": { 00:11:13.911 "read": true, 00:11:13.911 "write": true, 00:11:13.911 "unmap": true, 00:11:13.911 "flush": true, 00:11:13.911 "reset": true, 00:11:13.911 "nvme_admin": false, 00:11:13.911 "nvme_io": false, 00:11:13.911 "nvme_io_md": false, 00:11:13.911 "write_zeroes": true, 00:11:13.911 "zcopy": true, 00:11:13.911 "get_zone_info": false, 00:11:13.911 "zone_management": false, 00:11:13.911 "zone_append": false, 00:11:13.911 "compare": false, 00:11:13.911 "compare_and_write": false, 00:11:13.911 "abort": true, 00:11:13.911 "seek_hole": false, 00:11:13.911 "seek_data": false, 00:11:13.911 "copy": true, 00:11:13.911 "nvme_iov_md": false 00:11:13.911 }, 00:11:13.911 "memory_domains": [ 00:11:13.911 { 00:11:13.911 "dma_device_id": "system", 00:11:13.911 "dma_device_type": 1 00:11:13.911 }, 00:11:13.911 { 00:11:13.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.911 "dma_device_type": 2 00:11:13.911 } 00:11:13.911 ], 00:11:13.911 "driver_specific": {} 00:11:13.911 } 00:11:13.911 ] 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.911 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.911 "name": "Existed_Raid", 00:11:13.911 "uuid": "3eb347a8-6346-449d-bf6f-fb31403c7efb", 00:11:13.911 "strip_size_kb": 0, 00:11:13.911 "state": "online", 00:11:13.911 "raid_level": "raid1", 00:11:13.911 "superblock": true, 00:11:13.911 "num_base_bdevs": 4, 00:11:13.911 "num_base_bdevs_discovered": 4, 00:11:13.911 "num_base_bdevs_operational": 4, 00:11:13.911 "base_bdevs_list": [ 00:11:13.911 { 00:11:13.911 "name": "BaseBdev1", 00:11:13.911 "uuid": "d249006d-741e-48bc-89e3-ec9536bb3776", 00:11:13.911 "is_configured": true, 00:11:13.911 "data_offset": 2048, 00:11:13.911 "data_size": 63488 
00:11:13.911 }, 00:11:13.911 { 00:11:13.911 "name": "BaseBdev2", 00:11:13.911 "uuid": "4760d007-322d-4950-ba4a-aa88068a475c", 00:11:13.911 "is_configured": true, 00:11:13.911 "data_offset": 2048, 00:11:13.911 "data_size": 63488 00:11:13.911 }, 00:11:13.911 { 00:11:13.911 "name": "BaseBdev3", 00:11:13.911 "uuid": "e8253631-819b-40a1-a467-7644f774ce27", 00:11:13.911 "is_configured": true, 00:11:13.911 "data_offset": 2048, 00:11:13.911 "data_size": 63488 00:11:13.911 }, 00:11:13.911 { 00:11:13.911 "name": "BaseBdev4", 00:11:13.912 "uuid": "e5222cbe-c6c5-49d8-86ac-dcf92e6d6595", 00:11:13.912 "is_configured": true, 00:11:13.912 "data_offset": 2048, 00:11:13.912 "data_size": 63488 00:11:13.912 } 00:11:13.912 ] 00:11:13.912 }' 00:11:13.912 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.912 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.171 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:14.171 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:14.171 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.171 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.171 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.172 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.172 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.172 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:14.172 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.172 
18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.431 [2024-11-28 18:51:43.780454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.431 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.431 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.431 "name": "Existed_Raid", 00:11:14.431 "aliases": [ 00:11:14.431 "3eb347a8-6346-449d-bf6f-fb31403c7efb" 00:11:14.431 ], 00:11:14.431 "product_name": "Raid Volume", 00:11:14.431 "block_size": 512, 00:11:14.431 "num_blocks": 63488, 00:11:14.431 "uuid": "3eb347a8-6346-449d-bf6f-fb31403c7efb", 00:11:14.431 "assigned_rate_limits": { 00:11:14.431 "rw_ios_per_sec": 0, 00:11:14.431 "rw_mbytes_per_sec": 0, 00:11:14.431 "r_mbytes_per_sec": 0, 00:11:14.431 "w_mbytes_per_sec": 0 00:11:14.431 }, 00:11:14.431 "claimed": false, 00:11:14.431 "zoned": false, 00:11:14.431 "supported_io_types": { 00:11:14.431 "read": true, 00:11:14.431 "write": true, 00:11:14.431 "unmap": false, 00:11:14.431 "flush": false, 00:11:14.431 "reset": true, 00:11:14.431 "nvme_admin": false, 00:11:14.431 "nvme_io": false, 00:11:14.431 "nvme_io_md": false, 00:11:14.431 "write_zeroes": true, 00:11:14.431 "zcopy": false, 00:11:14.431 "get_zone_info": false, 00:11:14.431 "zone_management": false, 00:11:14.431 "zone_append": false, 00:11:14.431 "compare": false, 00:11:14.432 "compare_and_write": false, 00:11:14.432 "abort": false, 00:11:14.432 "seek_hole": false, 00:11:14.432 "seek_data": false, 00:11:14.432 "copy": false, 00:11:14.432 "nvme_iov_md": false 00:11:14.432 }, 00:11:14.432 "memory_domains": [ 00:11:14.432 { 00:11:14.432 "dma_device_id": "system", 00:11:14.432 "dma_device_type": 1 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.432 "dma_device_type": 2 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "dma_device_id": "system", 
00:11:14.432 "dma_device_type": 1 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.432 "dma_device_type": 2 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "dma_device_id": "system", 00:11:14.432 "dma_device_type": 1 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.432 "dma_device_type": 2 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "dma_device_id": "system", 00:11:14.432 "dma_device_type": 1 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.432 "dma_device_type": 2 00:11:14.432 } 00:11:14.432 ], 00:11:14.432 "driver_specific": { 00:11:14.432 "raid": { 00:11:14.432 "uuid": "3eb347a8-6346-449d-bf6f-fb31403c7efb", 00:11:14.432 "strip_size_kb": 0, 00:11:14.432 "state": "online", 00:11:14.432 "raid_level": "raid1", 00:11:14.432 "superblock": true, 00:11:14.432 "num_base_bdevs": 4, 00:11:14.432 "num_base_bdevs_discovered": 4, 00:11:14.432 "num_base_bdevs_operational": 4, 00:11:14.432 "base_bdevs_list": [ 00:11:14.432 { 00:11:14.432 "name": "BaseBdev1", 00:11:14.432 "uuid": "d249006d-741e-48bc-89e3-ec9536bb3776", 00:11:14.432 "is_configured": true, 00:11:14.432 "data_offset": 2048, 00:11:14.432 "data_size": 63488 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "name": "BaseBdev2", 00:11:14.432 "uuid": "4760d007-322d-4950-ba4a-aa88068a475c", 00:11:14.432 "is_configured": true, 00:11:14.432 "data_offset": 2048, 00:11:14.432 "data_size": 63488 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "name": "BaseBdev3", 00:11:14.432 "uuid": "e8253631-819b-40a1-a467-7644f774ce27", 00:11:14.432 "is_configured": true, 00:11:14.432 "data_offset": 2048, 00:11:14.432 "data_size": 63488 00:11:14.432 }, 00:11:14.432 { 00:11:14.432 "name": "BaseBdev4", 00:11:14.432 "uuid": "e5222cbe-c6c5-49d8-86ac-dcf92e6d6595", 00:11:14.432 "is_configured": true, 00:11:14.432 "data_offset": 2048, 00:11:14.432 "data_size": 63488 00:11:14.432 } 00:11:14.432 ] 00:11:14.432 } 00:11:14.432 
} 00:11:14.432 }' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:14.432 BaseBdev2 00:11:14.432 BaseBdev3 00:11:14.432 BaseBdev4' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.432 18:51:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.432 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.432 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.432 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.432 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.432 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.432 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:14.432 18:51:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.432 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.432 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.692 [2024-11-28 18:51:44.044241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:14.692 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.693 "name": "Existed_Raid", 00:11:14.693 "uuid": "3eb347a8-6346-449d-bf6f-fb31403c7efb", 00:11:14.693 "strip_size_kb": 0, 00:11:14.693 "state": "online", 00:11:14.693 "raid_level": "raid1", 00:11:14.693 "superblock": true, 00:11:14.693 "num_base_bdevs": 4, 00:11:14.693 "num_base_bdevs_discovered": 3, 00:11:14.693 "num_base_bdevs_operational": 3, 00:11:14.693 "base_bdevs_list": [ 00:11:14.693 { 00:11:14.693 "name": null, 00:11:14.693 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:14.693 "is_configured": false, 00:11:14.693 "data_offset": 0, 00:11:14.693 "data_size": 63488 00:11:14.693 }, 00:11:14.693 { 00:11:14.693 "name": "BaseBdev2", 00:11:14.693 "uuid": "4760d007-322d-4950-ba4a-aa88068a475c", 00:11:14.693 "is_configured": true, 00:11:14.693 "data_offset": 2048, 00:11:14.693 "data_size": 63488 00:11:14.693 }, 00:11:14.693 { 00:11:14.693 "name": "BaseBdev3", 00:11:14.693 "uuid": "e8253631-819b-40a1-a467-7644f774ce27", 00:11:14.693 "is_configured": true, 00:11:14.693 "data_offset": 2048, 00:11:14.693 "data_size": 63488 00:11:14.693 }, 00:11:14.693 { 00:11:14.693 "name": "BaseBdev4", 00:11:14.693 "uuid": "e5222cbe-c6c5-49d8-86ac-dcf92e6d6595", 00:11:14.693 "is_configured": true, 00:11:14.693 "data_offset": 2048, 00:11:14.693 "data_size": 63488 00:11:14.693 } 00:11:14.693 ] 00:11:14.693 }' 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.693 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.952 18:51:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.952 [2024-11-28 18:51:44.527849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.952 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.953 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.953 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:15.212 [2024-11-28 18:51:44.578984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.212 [2024-11-28 18:51:44.630214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:15.212 [2024-11-28 18:51:44.630321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.212 [2024-11-28 18:51:44.642087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.212 [2024-11-28 
18:51:44.642142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.212 [2024-11-28 18:51:44.642151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:15.212 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.213 18:51:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.213 BaseBdev2 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.213 [ 00:11:15.213 { 00:11:15.213 "name": "BaseBdev2", 00:11:15.213 "aliases": [ 00:11:15.213 "46146cd5-d75b-4966-bf27-fca2d4a5ae95" 00:11:15.213 ], 00:11:15.213 "product_name": "Malloc disk", 00:11:15.213 "block_size": 512, 00:11:15.213 "num_blocks": 65536, 00:11:15.213 
"uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:15.213 "assigned_rate_limits": { 00:11:15.213 "rw_ios_per_sec": 0, 00:11:15.213 "rw_mbytes_per_sec": 0, 00:11:15.213 "r_mbytes_per_sec": 0, 00:11:15.213 "w_mbytes_per_sec": 0 00:11:15.213 }, 00:11:15.213 "claimed": false, 00:11:15.213 "zoned": false, 00:11:15.213 "supported_io_types": { 00:11:15.213 "read": true, 00:11:15.213 "write": true, 00:11:15.213 "unmap": true, 00:11:15.213 "flush": true, 00:11:15.213 "reset": true, 00:11:15.213 "nvme_admin": false, 00:11:15.213 "nvme_io": false, 00:11:15.213 "nvme_io_md": false, 00:11:15.213 "write_zeroes": true, 00:11:15.213 "zcopy": true, 00:11:15.213 "get_zone_info": false, 00:11:15.213 "zone_management": false, 00:11:15.213 "zone_append": false, 00:11:15.213 "compare": false, 00:11:15.213 "compare_and_write": false, 00:11:15.213 "abort": true, 00:11:15.213 "seek_hole": false, 00:11:15.213 "seek_data": false, 00:11:15.213 "copy": true, 00:11:15.213 "nvme_iov_md": false 00:11:15.213 }, 00:11:15.213 "memory_domains": [ 00:11:15.213 { 00:11:15.213 "dma_device_id": "system", 00:11:15.213 "dma_device_type": 1 00:11:15.213 }, 00:11:15.213 { 00:11:15.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.213 "dma_device_type": 2 00:11:15.213 } 00:11:15.213 ], 00:11:15.213 "driver_specific": {} 00:11:15.213 } 00:11:15.213 ] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.213 BaseBdev3 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.213 [ 00:11:15.213 { 00:11:15.213 "name": "BaseBdev3", 00:11:15.213 "aliases": [ 00:11:15.213 "28c2e105-d307-4d49-a93b-3f86325a8e2d" 00:11:15.213 ], 00:11:15.213 "product_name": "Malloc disk", 00:11:15.213 "block_size": 512, 
00:11:15.213 "num_blocks": 65536, 00:11:15.213 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:15.213 "assigned_rate_limits": { 00:11:15.213 "rw_ios_per_sec": 0, 00:11:15.213 "rw_mbytes_per_sec": 0, 00:11:15.213 "r_mbytes_per_sec": 0, 00:11:15.213 "w_mbytes_per_sec": 0 00:11:15.213 }, 00:11:15.213 "claimed": false, 00:11:15.213 "zoned": false, 00:11:15.213 "supported_io_types": { 00:11:15.213 "read": true, 00:11:15.213 "write": true, 00:11:15.213 "unmap": true, 00:11:15.213 "flush": true, 00:11:15.213 "reset": true, 00:11:15.213 "nvme_admin": false, 00:11:15.213 "nvme_io": false, 00:11:15.213 "nvme_io_md": false, 00:11:15.213 "write_zeroes": true, 00:11:15.213 "zcopy": true, 00:11:15.213 "get_zone_info": false, 00:11:15.213 "zone_management": false, 00:11:15.213 "zone_append": false, 00:11:15.213 "compare": false, 00:11:15.213 "compare_and_write": false, 00:11:15.213 "abort": true, 00:11:15.213 "seek_hole": false, 00:11:15.213 "seek_data": false, 00:11:15.213 "copy": true, 00:11:15.213 "nvme_iov_md": false 00:11:15.213 }, 00:11:15.213 "memory_domains": [ 00:11:15.213 { 00:11:15.213 "dma_device_id": "system", 00:11:15.213 "dma_device_type": 1 00:11:15.213 }, 00:11:15.213 { 00:11:15.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.213 "dma_device_type": 2 00:11:15.213 } 00:11:15.213 ], 00:11:15.213 "driver_specific": {} 00:11:15.213 } 00:11:15.213 ] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:15.213 18:51:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.213 BaseBdev4 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.213 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.473 [ 00:11:15.473 { 00:11:15.473 "name": "BaseBdev4", 00:11:15.473 "aliases": [ 00:11:15.473 "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc" 00:11:15.473 ], 
00:11:15.473 "product_name": "Malloc disk", 00:11:15.473 "block_size": 512, 00:11:15.473 "num_blocks": 65536, 00:11:15.473 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 00:11:15.473 "assigned_rate_limits": { 00:11:15.473 "rw_ios_per_sec": 0, 00:11:15.473 "rw_mbytes_per_sec": 0, 00:11:15.473 "r_mbytes_per_sec": 0, 00:11:15.473 "w_mbytes_per_sec": 0 00:11:15.473 }, 00:11:15.473 "claimed": false, 00:11:15.473 "zoned": false, 00:11:15.473 "supported_io_types": { 00:11:15.473 "read": true, 00:11:15.473 "write": true, 00:11:15.473 "unmap": true, 00:11:15.473 "flush": true, 00:11:15.473 "reset": true, 00:11:15.473 "nvme_admin": false, 00:11:15.473 "nvme_io": false, 00:11:15.473 "nvme_io_md": false, 00:11:15.473 "write_zeroes": true, 00:11:15.473 "zcopy": true, 00:11:15.473 "get_zone_info": false, 00:11:15.473 "zone_management": false, 00:11:15.473 "zone_append": false, 00:11:15.473 "compare": false, 00:11:15.473 "compare_and_write": false, 00:11:15.473 "abort": true, 00:11:15.473 "seek_hole": false, 00:11:15.473 "seek_data": false, 00:11:15.473 "copy": true, 00:11:15.473 "nvme_iov_md": false 00:11:15.473 }, 00:11:15.473 "memory_domains": [ 00:11:15.473 { 00:11:15.473 "dma_device_id": "system", 00:11:15.473 "dma_device_type": 1 00:11:15.473 }, 00:11:15.473 { 00:11:15.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.473 "dma_device_type": 2 00:11:15.473 } 00:11:15.473 ], 00:11:15.473 "driver_specific": {} 00:11:15.473 } 00:11:15.473 ] 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.473 [2024-11-28 18:51:44.846188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.473 [2024-11-28 18:51:44.846234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.473 [2024-11-28 18:51:44.846255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.473 [2024-11-28 18:51:44.848023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.473 [2024-11-28 18:51:44.848083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.473 "name": "Existed_Raid", 00:11:15.473 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:15.473 "strip_size_kb": 0, 00:11:15.473 "state": "configuring", 00:11:15.473 "raid_level": "raid1", 00:11:15.473 "superblock": true, 00:11:15.473 "num_base_bdevs": 4, 00:11:15.473 "num_base_bdevs_discovered": 3, 00:11:15.473 "num_base_bdevs_operational": 4, 00:11:15.473 "base_bdevs_list": [ 00:11:15.473 { 00:11:15.473 "name": "BaseBdev1", 00:11:15.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.473 "is_configured": false, 00:11:15.473 "data_offset": 0, 00:11:15.473 "data_size": 0 00:11:15.473 }, 00:11:15.473 { 00:11:15.473 "name": "BaseBdev2", 00:11:15.473 "uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:15.473 "is_configured": true, 00:11:15.473 "data_offset": 2048, 00:11:15.473 "data_size": 63488 00:11:15.473 }, 00:11:15.473 { 00:11:15.473 "name": "BaseBdev3", 00:11:15.473 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:15.473 "is_configured": true, 00:11:15.473 "data_offset": 2048, 
00:11:15.473 "data_size": 63488 00:11:15.473 }, 00:11:15.473 { 00:11:15.473 "name": "BaseBdev4", 00:11:15.473 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 00:11:15.473 "is_configured": true, 00:11:15.473 "data_offset": 2048, 00:11:15.473 "data_size": 63488 00:11:15.473 } 00:11:15.473 ] 00:11:15.473 }' 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.473 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.731 [2024-11-28 18:51:45.278261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.731 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.731 "name": "Existed_Raid", 00:11:15.731 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:15.731 "strip_size_kb": 0, 00:11:15.731 "state": "configuring", 00:11:15.731 "raid_level": "raid1", 00:11:15.731 "superblock": true, 00:11:15.731 "num_base_bdevs": 4, 00:11:15.731 "num_base_bdevs_discovered": 2, 00:11:15.731 "num_base_bdevs_operational": 4, 00:11:15.731 "base_bdevs_list": [ 00:11:15.731 { 00:11:15.731 "name": "BaseBdev1", 00:11:15.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.731 "is_configured": false, 00:11:15.731 "data_offset": 0, 00:11:15.731 "data_size": 0 00:11:15.731 }, 00:11:15.731 { 00:11:15.731 "name": null, 00:11:15.731 "uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:15.731 "is_configured": false, 00:11:15.731 "data_offset": 0, 00:11:15.731 "data_size": 63488 00:11:15.731 }, 00:11:15.732 { 00:11:15.732 "name": "BaseBdev3", 00:11:15.732 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:15.732 "is_configured": true, 00:11:15.732 "data_offset": 2048, 00:11:15.732 
"data_size": 63488 00:11:15.732 }, 00:11:15.732 { 00:11:15.732 "name": "BaseBdev4", 00:11:15.732 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 00:11:15.732 "is_configured": true, 00:11:15.732 "data_offset": 2048, 00:11:15.732 "data_size": 63488 00:11:15.732 } 00:11:15.732 ] 00:11:15.732 }' 00:11:15.732 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.732 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.990 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.990 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.990 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:15.990 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.250 [2024-11-28 18:51:45.637282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.250 BaseBdev1 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.250 [ 00:11:16.250 { 00:11:16.250 "name": "BaseBdev1", 00:11:16.250 "aliases": [ 00:11:16.250 "322ec8d5-f2d8-4fe1-ba85-69151f370951" 00:11:16.250 ], 00:11:16.250 "product_name": "Malloc disk", 00:11:16.250 "block_size": 512, 00:11:16.250 "num_blocks": 65536, 00:11:16.250 "uuid": "322ec8d5-f2d8-4fe1-ba85-69151f370951", 00:11:16.250 "assigned_rate_limits": { 00:11:16.250 "rw_ios_per_sec": 0, 00:11:16.250 "rw_mbytes_per_sec": 0, 00:11:16.250 "r_mbytes_per_sec": 0, 00:11:16.250 "w_mbytes_per_sec": 0 00:11:16.250 }, 00:11:16.250 "claimed": true, 00:11:16.250 "claim_type": "exclusive_write", 00:11:16.250 "zoned": false, 00:11:16.250 "supported_io_types": { 
00:11:16.250 "read": true, 00:11:16.250 "write": true, 00:11:16.250 "unmap": true, 00:11:16.250 "flush": true, 00:11:16.250 "reset": true, 00:11:16.250 "nvme_admin": false, 00:11:16.250 "nvme_io": false, 00:11:16.250 "nvme_io_md": false, 00:11:16.250 "write_zeroes": true, 00:11:16.250 "zcopy": true, 00:11:16.250 "get_zone_info": false, 00:11:16.250 "zone_management": false, 00:11:16.250 "zone_append": false, 00:11:16.250 "compare": false, 00:11:16.250 "compare_and_write": false, 00:11:16.250 "abort": true, 00:11:16.250 "seek_hole": false, 00:11:16.250 "seek_data": false, 00:11:16.250 "copy": true, 00:11:16.250 "nvme_iov_md": false 00:11:16.250 }, 00:11:16.250 "memory_domains": [ 00:11:16.250 { 00:11:16.250 "dma_device_id": "system", 00:11:16.250 "dma_device_type": 1 00:11:16.250 }, 00:11:16.250 { 00:11:16.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.250 "dma_device_type": 2 00:11:16.250 } 00:11:16.250 ], 00:11:16.250 "driver_specific": {} 00:11:16.250 } 00:11:16.250 ] 00:11:16.250 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.251 18:51:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.251 "name": "Existed_Raid", 00:11:16.251 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:16.251 "strip_size_kb": 0, 00:11:16.251 "state": "configuring", 00:11:16.251 "raid_level": "raid1", 00:11:16.251 "superblock": true, 00:11:16.251 "num_base_bdevs": 4, 00:11:16.251 "num_base_bdevs_discovered": 3, 00:11:16.251 "num_base_bdevs_operational": 4, 00:11:16.251 "base_bdevs_list": [ 00:11:16.251 { 00:11:16.251 "name": "BaseBdev1", 00:11:16.251 "uuid": "322ec8d5-f2d8-4fe1-ba85-69151f370951", 00:11:16.251 "is_configured": true, 00:11:16.251 "data_offset": 2048, 00:11:16.251 "data_size": 63488 00:11:16.251 }, 00:11:16.251 { 00:11:16.251 "name": null, 00:11:16.251 "uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:16.251 "is_configured": false, 00:11:16.251 "data_offset": 0, 00:11:16.251 "data_size": 63488 00:11:16.251 }, 00:11:16.251 { 00:11:16.251 "name": 
"BaseBdev3", 00:11:16.251 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:16.251 "is_configured": true, 00:11:16.251 "data_offset": 2048, 00:11:16.251 "data_size": 63488 00:11:16.251 }, 00:11:16.251 { 00:11:16.251 "name": "BaseBdev4", 00:11:16.251 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 00:11:16.251 "is_configured": true, 00:11:16.251 "data_offset": 2048, 00:11:16.251 "data_size": 63488 00:11:16.251 } 00:11:16.251 ] 00:11:16.251 }' 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.251 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.819 [2024-11-28 18:51:46.177458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.819 "name": "Existed_Raid", 00:11:16.819 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:16.819 "strip_size_kb": 0, 00:11:16.819 "state": "configuring", 00:11:16.819 
"raid_level": "raid1", 00:11:16.819 "superblock": true, 00:11:16.819 "num_base_bdevs": 4, 00:11:16.819 "num_base_bdevs_discovered": 2, 00:11:16.819 "num_base_bdevs_operational": 4, 00:11:16.819 "base_bdevs_list": [ 00:11:16.819 { 00:11:16.819 "name": "BaseBdev1", 00:11:16.819 "uuid": "322ec8d5-f2d8-4fe1-ba85-69151f370951", 00:11:16.819 "is_configured": true, 00:11:16.819 "data_offset": 2048, 00:11:16.819 "data_size": 63488 00:11:16.819 }, 00:11:16.819 { 00:11:16.819 "name": null, 00:11:16.819 "uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:16.819 "is_configured": false, 00:11:16.819 "data_offset": 0, 00:11:16.819 "data_size": 63488 00:11:16.819 }, 00:11:16.819 { 00:11:16.819 "name": null, 00:11:16.819 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:16.819 "is_configured": false, 00:11:16.819 "data_offset": 0, 00:11:16.819 "data_size": 63488 00:11:16.819 }, 00:11:16.819 { 00:11:16.819 "name": "BaseBdev4", 00:11:16.819 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 00:11:16.819 "is_configured": true, 00:11:16.819 "data_offset": 2048, 00:11:16.819 "data_size": 63488 00:11:16.819 } 00:11:16.819 ] 00:11:16.819 }' 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.819 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 [2024-11-28 18:51:46.565634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.079 18:51:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.079 "name": "Existed_Raid", 00:11:17.079 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:17.079 "strip_size_kb": 0, 00:11:17.079 "state": "configuring", 00:11:17.079 "raid_level": "raid1", 00:11:17.079 "superblock": true, 00:11:17.079 "num_base_bdevs": 4, 00:11:17.079 "num_base_bdevs_discovered": 3, 00:11:17.079 "num_base_bdevs_operational": 4, 00:11:17.079 "base_bdevs_list": [ 00:11:17.079 { 00:11:17.079 "name": "BaseBdev1", 00:11:17.079 "uuid": "322ec8d5-f2d8-4fe1-ba85-69151f370951", 00:11:17.079 "is_configured": true, 00:11:17.079 "data_offset": 2048, 00:11:17.079 "data_size": 63488 00:11:17.079 }, 00:11:17.079 { 00:11:17.079 "name": null, 00:11:17.079 "uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:17.079 "is_configured": false, 00:11:17.079 "data_offset": 0, 00:11:17.079 "data_size": 63488 00:11:17.079 }, 00:11:17.079 { 00:11:17.079 "name": "BaseBdev3", 00:11:17.079 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:17.079 "is_configured": true, 00:11:17.079 "data_offset": 2048, 00:11:17.079 "data_size": 63488 00:11:17.079 }, 00:11:17.079 { 00:11:17.079 "name": "BaseBdev4", 00:11:17.079 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 00:11:17.079 "is_configured": true, 00:11:17.079 "data_offset": 2048, 00:11:17.079 "data_size": 63488 00:11:17.079 } 00:11:17.079 ] 00:11:17.079 }' 00:11:17.079 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.079 
18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.647 [2024-11-28 18:51:47.057759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.647 18:51:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.647 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.647 "name": "Existed_Raid", 00:11:17.647 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:17.647 "strip_size_kb": 0, 00:11:17.647 "state": "configuring", 00:11:17.647 "raid_level": "raid1", 00:11:17.648 "superblock": true, 00:11:17.648 "num_base_bdevs": 4, 00:11:17.648 "num_base_bdevs_discovered": 2, 00:11:17.648 "num_base_bdevs_operational": 4, 00:11:17.648 "base_bdevs_list": [ 00:11:17.648 { 00:11:17.648 "name": null, 00:11:17.648 "uuid": "322ec8d5-f2d8-4fe1-ba85-69151f370951", 00:11:17.648 "is_configured": false, 00:11:17.648 "data_offset": 0, 00:11:17.648 "data_size": 63488 00:11:17.648 }, 00:11:17.648 { 00:11:17.648 "name": null, 00:11:17.648 "uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:17.648 "is_configured": false, 
00:11:17.648 "data_offset": 0, 00:11:17.648 "data_size": 63488 00:11:17.648 }, 00:11:17.648 { 00:11:17.648 "name": "BaseBdev3", 00:11:17.648 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:17.648 "is_configured": true, 00:11:17.648 "data_offset": 2048, 00:11:17.648 "data_size": 63488 00:11:17.648 }, 00:11:17.648 { 00:11:17.648 "name": "BaseBdev4", 00:11:17.648 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 00:11:17.648 "is_configured": true, 00:11:17.648 "data_offset": 2048, 00:11:17.648 "data_size": 63488 00:11:17.648 } 00:11:17.648 ] 00:11:17.648 }' 00:11:17.648 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.648 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.906 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.906 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.906 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.906 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:17.906 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.166 [2024-11-28 18:51:47.524246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.166 18:51:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.166 "name": 
"Existed_Raid", 00:11:18.166 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:18.166 "strip_size_kb": 0, 00:11:18.166 "state": "configuring", 00:11:18.166 "raid_level": "raid1", 00:11:18.166 "superblock": true, 00:11:18.166 "num_base_bdevs": 4, 00:11:18.166 "num_base_bdevs_discovered": 3, 00:11:18.166 "num_base_bdevs_operational": 4, 00:11:18.166 "base_bdevs_list": [ 00:11:18.166 { 00:11:18.166 "name": null, 00:11:18.166 "uuid": "322ec8d5-f2d8-4fe1-ba85-69151f370951", 00:11:18.166 "is_configured": false, 00:11:18.166 "data_offset": 0, 00:11:18.166 "data_size": 63488 00:11:18.166 }, 00:11:18.166 { 00:11:18.166 "name": "BaseBdev2", 00:11:18.166 "uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:18.166 "is_configured": true, 00:11:18.166 "data_offset": 2048, 00:11:18.166 "data_size": 63488 00:11:18.166 }, 00:11:18.166 { 00:11:18.166 "name": "BaseBdev3", 00:11:18.166 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:18.166 "is_configured": true, 00:11:18.166 "data_offset": 2048, 00:11:18.166 "data_size": 63488 00:11:18.166 }, 00:11:18.166 { 00:11:18.166 "name": "BaseBdev4", 00:11:18.166 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 00:11:18.166 "is_configured": true, 00:11:18.166 "data_offset": 2048, 00:11:18.166 "data_size": 63488 00:11:18.166 } 00:11:18.166 ] 00:11:18.166 }' 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.166 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.425 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.425 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.425 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.425 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:18.425 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.425 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:18.425 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:18.425 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.425 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.425 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 322ec8d5-f2d8-4fe1-ba85-69151f370951 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.684 [2024-11-28 18:51:48.075437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:18.684 [2024-11-28 18:51:48.075678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:18.684 [2024-11-28 18:51:48.075692] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.684 NewBaseBdev 00:11:18.684 [2024-11-28 18:51:48.075934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:18.684 [2024-11-28 18:51:48.076060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:18.684 [2024-11-28 18:51:48.076073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:18.684 [2024-11-28 18:51:48.076170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.684 [ 00:11:18.684 { 00:11:18.684 "name": "NewBaseBdev", 00:11:18.684 "aliases": [ 00:11:18.684 "322ec8d5-f2d8-4fe1-ba85-69151f370951" 00:11:18.684 ], 00:11:18.684 "product_name": "Malloc disk", 00:11:18.684 "block_size": 512, 
00:11:18.684 "num_blocks": 65536, 00:11:18.684 "uuid": "322ec8d5-f2d8-4fe1-ba85-69151f370951", 00:11:18.684 "assigned_rate_limits": { 00:11:18.684 "rw_ios_per_sec": 0, 00:11:18.684 "rw_mbytes_per_sec": 0, 00:11:18.684 "r_mbytes_per_sec": 0, 00:11:18.684 "w_mbytes_per_sec": 0 00:11:18.684 }, 00:11:18.684 "claimed": true, 00:11:18.684 "claim_type": "exclusive_write", 00:11:18.684 "zoned": false, 00:11:18.684 "supported_io_types": { 00:11:18.684 "read": true, 00:11:18.684 "write": true, 00:11:18.684 "unmap": true, 00:11:18.684 "flush": true, 00:11:18.684 "reset": true, 00:11:18.684 "nvme_admin": false, 00:11:18.684 "nvme_io": false, 00:11:18.684 "nvme_io_md": false, 00:11:18.684 "write_zeroes": true, 00:11:18.684 "zcopy": true, 00:11:18.684 "get_zone_info": false, 00:11:18.684 "zone_management": false, 00:11:18.684 "zone_append": false, 00:11:18.684 "compare": false, 00:11:18.684 "compare_and_write": false, 00:11:18.684 "abort": true, 00:11:18.684 "seek_hole": false, 00:11:18.684 "seek_data": false, 00:11:18.684 "copy": true, 00:11:18.684 "nvme_iov_md": false 00:11:18.684 }, 00:11:18.684 "memory_domains": [ 00:11:18.684 { 00:11:18.684 "dma_device_id": "system", 00:11:18.684 "dma_device_type": 1 00:11:18.684 }, 00:11:18.684 { 00:11:18.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.684 "dma_device_type": 2 00:11:18.684 } 00:11:18.684 ], 00:11:18.684 "driver_specific": {} 00:11:18.684 } 00:11:18.684 ] 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.684 "name": "Existed_Raid", 00:11:18.684 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:18.684 "strip_size_kb": 0, 00:11:18.684 "state": "online", 00:11:18.684 "raid_level": "raid1", 00:11:18.684 "superblock": true, 00:11:18.684 "num_base_bdevs": 4, 00:11:18.684 "num_base_bdevs_discovered": 4, 00:11:18.684 "num_base_bdevs_operational": 4, 00:11:18.684 "base_bdevs_list": [ 00:11:18.684 { 00:11:18.684 "name": "NewBaseBdev", 00:11:18.684 "uuid": 
"322ec8d5-f2d8-4fe1-ba85-69151f370951", 00:11:18.684 "is_configured": true, 00:11:18.684 "data_offset": 2048, 00:11:18.684 "data_size": 63488 00:11:18.684 }, 00:11:18.684 { 00:11:18.684 "name": "BaseBdev2", 00:11:18.684 "uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:18.684 "is_configured": true, 00:11:18.684 "data_offset": 2048, 00:11:18.684 "data_size": 63488 00:11:18.684 }, 00:11:18.684 { 00:11:18.684 "name": "BaseBdev3", 00:11:18.684 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:18.684 "is_configured": true, 00:11:18.684 "data_offset": 2048, 00:11:18.684 "data_size": 63488 00:11:18.684 }, 00:11:18.684 { 00:11:18.684 "name": "BaseBdev4", 00:11:18.684 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 00:11:18.684 "is_configured": true, 00:11:18.684 "data_offset": 2048, 00:11:18.684 "data_size": 63488 00:11:18.684 } 00:11:18.684 ] 00:11:18.684 }' 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.684 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.943 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.943 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.943 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.943 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.943 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.943 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.943 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.943 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:11:18.943 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.202 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.202 [2024-11-28 18:51:48.551950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.202 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.202 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.202 "name": "Existed_Raid", 00:11:19.202 "aliases": [ 00:11:19.202 "8835dccb-f53a-4a6a-aec7-5d39b6e747c9" 00:11:19.202 ], 00:11:19.202 "product_name": "Raid Volume", 00:11:19.202 "block_size": 512, 00:11:19.202 "num_blocks": 63488, 00:11:19.202 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:19.202 "assigned_rate_limits": { 00:11:19.202 "rw_ios_per_sec": 0, 00:11:19.202 "rw_mbytes_per_sec": 0, 00:11:19.202 "r_mbytes_per_sec": 0, 00:11:19.202 "w_mbytes_per_sec": 0 00:11:19.202 }, 00:11:19.202 "claimed": false, 00:11:19.202 "zoned": false, 00:11:19.202 "supported_io_types": { 00:11:19.202 "read": true, 00:11:19.202 "write": true, 00:11:19.202 "unmap": false, 00:11:19.202 "flush": false, 00:11:19.203 "reset": true, 00:11:19.203 "nvme_admin": false, 00:11:19.203 "nvme_io": false, 00:11:19.203 "nvme_io_md": false, 00:11:19.203 "write_zeroes": true, 00:11:19.203 "zcopy": false, 00:11:19.203 "get_zone_info": false, 00:11:19.203 "zone_management": false, 00:11:19.203 "zone_append": false, 00:11:19.203 "compare": false, 00:11:19.203 "compare_and_write": false, 00:11:19.203 "abort": false, 00:11:19.203 "seek_hole": false, 00:11:19.203 "seek_data": false, 00:11:19.203 "copy": false, 00:11:19.203 "nvme_iov_md": false 00:11:19.203 }, 00:11:19.203 "memory_domains": [ 00:11:19.203 { 00:11:19.203 "dma_device_id": "system", 00:11:19.203 "dma_device_type": 1 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.203 "dma_device_type": 2 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 "dma_device_id": "system", 00:11:19.203 "dma_device_type": 1 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.203 "dma_device_type": 2 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 "dma_device_id": "system", 00:11:19.203 "dma_device_type": 1 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.203 "dma_device_type": 2 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 "dma_device_id": "system", 00:11:19.203 "dma_device_type": 1 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.203 "dma_device_type": 2 00:11:19.203 } 00:11:19.203 ], 00:11:19.203 "driver_specific": { 00:11:19.203 "raid": { 00:11:19.203 "uuid": "8835dccb-f53a-4a6a-aec7-5d39b6e747c9", 00:11:19.203 "strip_size_kb": 0, 00:11:19.203 "state": "online", 00:11:19.203 "raid_level": "raid1", 00:11:19.203 "superblock": true, 00:11:19.203 "num_base_bdevs": 4, 00:11:19.203 "num_base_bdevs_discovered": 4, 00:11:19.203 "num_base_bdevs_operational": 4, 00:11:19.203 "base_bdevs_list": [ 00:11:19.203 { 00:11:19.203 "name": "NewBaseBdev", 00:11:19.203 "uuid": "322ec8d5-f2d8-4fe1-ba85-69151f370951", 00:11:19.203 "is_configured": true, 00:11:19.203 "data_offset": 2048, 00:11:19.203 "data_size": 63488 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 "name": "BaseBdev2", 00:11:19.203 "uuid": "46146cd5-d75b-4966-bf27-fca2d4a5ae95", 00:11:19.203 "is_configured": true, 00:11:19.203 "data_offset": 2048, 00:11:19.203 "data_size": 63488 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 "name": "BaseBdev3", 00:11:19.203 "uuid": "28c2e105-d307-4d49-a93b-3f86325a8e2d", 00:11:19.203 "is_configured": true, 00:11:19.203 "data_offset": 2048, 00:11:19.203 "data_size": 63488 00:11:19.203 }, 00:11:19.203 { 00:11:19.203 "name": "BaseBdev4", 00:11:19.203 "uuid": "f8ed8985-c5e9-4b2e-a4ff-7c2080851edc", 
00:11:19.203 "is_configured": true, 00:11:19.203 "data_offset": 2048, 00:11:19.203 "data_size": 63488 00:11:19.203 } 00:11:19.203 ] 00:11:19.203 } 00:11:19.203 } 00:11:19.203 }' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:19.203 BaseBdev2 00:11:19.203 BaseBdev3 00:11:19.203 BaseBdev4' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.203 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:19.464 18:51:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.464 [2024-11-28 18:51:48.879709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.464 [2024-11-28 18:51:48.879738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.464 [2024-11-28 18:51:48.879810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.464 [2024-11-28 18:51:48.880084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.464 [2024-11-28 18:51:48.880095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 86125 00:11:19.464 18:51:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86125 ']' 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 86125 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86125 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86125' 00:11:19.464 killing process with pid 86125 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 86125 00:11:19.464 [2024-11-28 18:51:48.928809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.464 18:51:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 86125 00:11:19.464 [2024-11-28 18:51:48.969026] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.724 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:19.724 00:11:19.724 real 0m9.107s 00:11:19.724 user 0m15.683s 00:11:19.724 sys 0m1.835s 00:11:19.724 18:51:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.724 18:51:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.724 ************************************ 00:11:19.724 END TEST raid_state_function_test_sb 00:11:19.724 ************************************ 00:11:19.724 18:51:49 
bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:19.724 18:51:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:19.724 18:51:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.724 18:51:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.724 ************************************ 00:11:19.724 START TEST raid_superblock_test 00:11:19.724 ************************************ 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:19.724 
18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=86769 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 86769 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 86769 ']' 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.724 18:51:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.984 [2024-11-28 18:51:49.353110] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:19.984 [2024-11-28 18:51:49.353317] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86769 ] 00:11:19.984 [2024-11-28 18:51:49.487828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:11:19.984 [2024-11-28 18:51:49.525374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.984 [2024-11-28 18:51:49.550093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.243 [2024-11-28 18:51:49.592006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.243 [2024-11-28 18:51:49.592121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.812 malloc1 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.812 [2024-11-28 18:51:50.191877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:20.812 [2024-11-28 18:51:50.191982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.812 [2024-11-28 18:51:50.192025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:20.812 [2024-11-28 18:51:50.192075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.812 [2024-11-28 18:51:50.194140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.812 [2024-11-28 18:51:50.194228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:20.812 pt1 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.812 malloc2 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.812 [2024-11-28 18:51:50.220264] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:20.812 [2024-11-28 18:51:50.220353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.812 [2024-11-28 18:51:50.220388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:20.812 [2024-11-28 18:51:50.220447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.812 [2024-11-28 18:51:50.222453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.812 [2024-11-28 18:51:50.222519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:20.812 pt2 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.812 malloc3 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:20.812 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.813 [2024-11-28 18:51:50.252652] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:20.813 [2024-11-28 18:51:50.252698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.813 [2024-11-28 18:51:50.252716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:20.813 [2024-11-28 18:51:50.252725] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.813 [2024-11-28 18:51:50.254730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.813 [2024-11-28 18:51:50.254810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:20.813 pt3 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.813 malloc4 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:20.813 18:51:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.813 [2024-11-28 18:51:50.298067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:20.813 [2024-11-28 18:51:50.298204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.813 [2024-11-28 18:51:50.298270] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:20.813 [2024-11-28 18:51:50.298323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.813 [2024-11-28 18:51:50.301412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.813 [2024-11-28 18:51:50.301523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:20.813 pt4 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.813 [2024-11-28 18:51:50.310184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:20.813 [2024-11-28 18:51:50.312188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:20.813 [2024-11-28 18:51:50.312311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:20.813 [2024-11-28 18:51:50.312405] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:20.813 [2024-11-28 18:51:50.312631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:20.813 [2024-11-28 18:51:50.312684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:20.813 [2024-11-28 18:51:50.312976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:20.813 [2024-11-28 18:51:50.313196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:20.813 [2024-11-28 18:51:50.313252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:20.813 [2024-11-28 18:51:50.313451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.813 18:51:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.813 "name": "raid_bdev1", 00:11:20.813 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:20.813 "strip_size_kb": 0, 00:11:20.813 "state": "online", 00:11:20.813 "raid_level": "raid1", 00:11:20.813 "superblock": true, 00:11:20.813 "num_base_bdevs": 4, 00:11:20.813 "num_base_bdevs_discovered": 4, 00:11:20.813 "num_base_bdevs_operational": 4, 00:11:20.813 "base_bdevs_list": [ 00:11:20.813 { 00:11:20.813 "name": "pt1", 00:11:20.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.813 "is_configured": true, 00:11:20.813 "data_offset": 2048, 00:11:20.813 "data_size": 63488 00:11:20.813 }, 00:11:20.813 { 00:11:20.813 "name": "pt2", 00:11:20.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.813 "is_configured": true, 00:11:20.813 "data_offset": 2048, 00:11:20.813 "data_size": 63488 00:11:20.813 }, 00:11:20.813 { 00:11:20.813 "name": "pt3", 00:11:20.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.813 "is_configured": true, 00:11:20.813 "data_offset": 2048, 00:11:20.813 "data_size": 63488 00:11:20.813 }, 00:11:20.813 { 00:11:20.813 "name": "pt4", 00:11:20.813 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.813 "is_configured": true, 00:11:20.813 "data_offset": 2048, 00:11:20.813 "data_size": 63488 00:11:20.813 } 
00:11:20.813 ] 00:11:20.813 }' 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.813 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.382 [2024-11-28 18:51:50.694537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.382 "name": "raid_bdev1", 00:11:21.382 "aliases": [ 00:11:21.382 "28917192-b9c0-4f1d-b129-ca64560ffeae" 00:11:21.382 ], 00:11:21.382 "product_name": "Raid Volume", 00:11:21.382 "block_size": 512, 00:11:21.382 "num_blocks": 63488, 00:11:21.382 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:21.382 "assigned_rate_limits": { 00:11:21.382 "rw_ios_per_sec": 0, 
00:11:21.382 "rw_mbytes_per_sec": 0, 00:11:21.382 "r_mbytes_per_sec": 0, 00:11:21.382 "w_mbytes_per_sec": 0 00:11:21.382 }, 00:11:21.382 "claimed": false, 00:11:21.382 "zoned": false, 00:11:21.382 "supported_io_types": { 00:11:21.382 "read": true, 00:11:21.382 "write": true, 00:11:21.382 "unmap": false, 00:11:21.382 "flush": false, 00:11:21.382 "reset": true, 00:11:21.382 "nvme_admin": false, 00:11:21.382 "nvme_io": false, 00:11:21.382 "nvme_io_md": false, 00:11:21.382 "write_zeroes": true, 00:11:21.382 "zcopy": false, 00:11:21.382 "get_zone_info": false, 00:11:21.382 "zone_management": false, 00:11:21.382 "zone_append": false, 00:11:21.382 "compare": false, 00:11:21.382 "compare_and_write": false, 00:11:21.382 "abort": false, 00:11:21.382 "seek_hole": false, 00:11:21.382 "seek_data": false, 00:11:21.382 "copy": false, 00:11:21.382 "nvme_iov_md": false 00:11:21.382 }, 00:11:21.382 "memory_domains": [ 00:11:21.382 { 00:11:21.382 "dma_device_id": "system", 00:11:21.382 "dma_device_type": 1 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.382 "dma_device_type": 2 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "dma_device_id": "system", 00:11:21.382 "dma_device_type": 1 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.382 "dma_device_type": 2 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "dma_device_id": "system", 00:11:21.382 "dma_device_type": 1 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.382 "dma_device_type": 2 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "dma_device_id": "system", 00:11:21.382 "dma_device_type": 1 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.382 "dma_device_type": 2 00:11:21.382 } 00:11:21.382 ], 00:11:21.382 "driver_specific": { 00:11:21.382 "raid": { 00:11:21.382 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:21.382 "strip_size_kb": 0, 00:11:21.382 
"state": "online", 00:11:21.382 "raid_level": "raid1", 00:11:21.382 "superblock": true, 00:11:21.382 "num_base_bdevs": 4, 00:11:21.382 "num_base_bdevs_discovered": 4, 00:11:21.382 "num_base_bdevs_operational": 4, 00:11:21.382 "base_bdevs_list": [ 00:11:21.382 { 00:11:21.382 "name": "pt1", 00:11:21.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.382 "is_configured": true, 00:11:21.382 "data_offset": 2048, 00:11:21.382 "data_size": 63488 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "name": "pt2", 00:11:21.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.382 "is_configured": true, 00:11:21.382 "data_offset": 2048, 00:11:21.382 "data_size": 63488 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "name": "pt3", 00:11:21.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.382 "is_configured": true, 00:11:21.382 "data_offset": 2048, 00:11:21.382 "data_size": 63488 00:11:21.382 }, 00:11:21.382 { 00:11:21.382 "name": "pt4", 00:11:21.382 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.382 "is_configured": true, 00:11:21.382 "data_offset": 2048, 00:11:21.382 "data_size": 63488 00:11:21.382 } 00:11:21.382 ] 00:11:21.382 } 00:11:21.382 } 00:11:21.382 }' 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:21.382 pt2 00:11:21.382 pt3 00:11:21.382 pt4' 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.382 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:21.383 18:51:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.383 18:51:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.642 [2024-11-28 18:51:51.018602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.642 18:51:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=28917192-b9c0-4f1d-b129-ca64560ffeae 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 28917192-b9c0-4f1d-b129-ca64560ffeae ']' 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.642 [2024-11-28 18:51:51.050290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.642 [2024-11-28 18:51:51.050352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.642 [2024-11-28 18:51:51.050481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.642 [2024-11-28 18:51:51.050606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.642 [2024-11-28 18:51:51.050662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.642 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.643 18:51:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt4 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.643 [2024-11-28 18:51:51.214373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:21.643 [2024-11-28 18:51:51.216228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:21.643 [2024-11-28 18:51:51.216274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:21.643 [2024-11-28 18:51:51.216303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:21.643 [2024-11-28 18:51:51.216346] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:21.643 [2024-11-28 18:51:51.216397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:21.643 [2024-11-28 18:51:51.216414] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:21.643 [2024-11-28 18:51:51.216440] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:21.643 [2024-11-28 18:51:51.216453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.643 [2024-11-28 18:51:51.216462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:11:21.643 request: 00:11:21.643 { 00:11:21.643 "name": "raid_bdev1", 00:11:21.643 "raid_level": "raid1", 00:11:21.643 "base_bdevs": [ 00:11:21.643 "malloc1", 00:11:21.643 "malloc2", 00:11:21.643 "malloc3", 00:11:21.643 
"malloc4" 00:11:21.643 ], 00:11:21.643 "superblock": false, 00:11:21.643 "method": "bdev_raid_create", 00:11:21.643 "req_id": 1 00:11:21.643 } 00:11:21.643 Got JSON-RPC error response 00:11:21.643 response: 00:11:21.643 { 00:11:21.643 "code": -17, 00:11:21.643 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:21.643 } 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.643 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.902 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:21.902 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:21.902 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:21.902 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.902 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.902 [2024-11-28 18:51:51.262360] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:21.902 [2024-11-28 18:51:51.262474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.902 [2024-11-28 18:51:51.262507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:21.902 [2024-11-28 18:51:51.262538] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.902 [2024-11-28 18:51:51.264607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.902 [2024-11-28 18:51:51.264678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:21.902 [2024-11-28 18:51:51.264782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:21.902 [2024-11-28 18:51:51.264845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:21.903 pt1 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.903 "name": "raid_bdev1", 00:11:21.903 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:21.903 "strip_size_kb": 0, 00:11:21.903 "state": "configuring", 00:11:21.903 "raid_level": "raid1", 00:11:21.903 "superblock": true, 00:11:21.903 "num_base_bdevs": 4, 00:11:21.903 "num_base_bdevs_discovered": 1, 00:11:21.903 "num_base_bdevs_operational": 4, 00:11:21.903 "base_bdevs_list": [ 00:11:21.903 { 00:11:21.903 "name": "pt1", 00:11:21.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.903 "is_configured": true, 00:11:21.903 "data_offset": 2048, 00:11:21.903 "data_size": 63488 00:11:21.903 }, 00:11:21.903 { 00:11:21.903 "name": null, 00:11:21.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.903 "is_configured": false, 00:11:21.903 "data_offset": 2048, 00:11:21.903 "data_size": 63488 00:11:21.903 }, 00:11:21.903 { 00:11:21.903 "name": null, 00:11:21.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.903 "is_configured": false, 00:11:21.903 "data_offset": 2048, 00:11:21.903 "data_size": 63488 00:11:21.903 }, 00:11:21.903 { 00:11:21.903 "name": null, 00:11:21.903 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.903 "is_configured": 
false, 00:11:21.903 "data_offset": 2048, 00:11:21.903 "data_size": 63488 00:11:21.903 } 00:11:21.903 ] 00:11:21.903 }' 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.903 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.163 [2024-11-28 18:51:51.666488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:22.163 [2024-11-28 18:51:51.666544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.163 [2024-11-28 18:51:51.666563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:22.163 [2024-11-28 18:51:51.666573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.163 [2024-11-28 18:51:51.666910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.163 [2024-11-28 18:51:51.666942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:22.163 [2024-11-28 18:51:51.667004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:22.163 [2024-11-28 18:51:51.667026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:22.163 pt2 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 
00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.163 [2024-11-28 18:51:51.674481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.163 18:51:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.163 "name": "raid_bdev1", 00:11:22.163 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:22.163 "strip_size_kb": 0, 00:11:22.163 "state": "configuring", 00:11:22.163 "raid_level": "raid1", 00:11:22.163 "superblock": true, 00:11:22.163 "num_base_bdevs": 4, 00:11:22.163 "num_base_bdevs_discovered": 1, 00:11:22.163 "num_base_bdevs_operational": 4, 00:11:22.163 "base_bdevs_list": [ 00:11:22.163 { 00:11:22.163 "name": "pt1", 00:11:22.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.163 "is_configured": true, 00:11:22.163 "data_offset": 2048, 00:11:22.163 "data_size": 63488 00:11:22.163 }, 00:11:22.163 { 00:11:22.163 "name": null, 00:11:22.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.163 "is_configured": false, 00:11:22.163 "data_offset": 0, 00:11:22.163 "data_size": 63488 00:11:22.163 }, 00:11:22.163 { 00:11:22.163 "name": null, 00:11:22.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.163 "is_configured": false, 00:11:22.163 "data_offset": 2048, 00:11:22.163 "data_size": 63488 00:11:22.163 }, 00:11:22.163 { 00:11:22.163 "name": null, 00:11:22.163 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.163 "is_configured": false, 00:11:22.163 "data_offset": 2048, 00:11:22.163 "data_size": 63488 00:11:22.163 } 00:11:22.163 ] 00:11:22.163 }' 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.163 18:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 [2024-11-28 18:51:52.154621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:22.732 [2024-11-28 18:51:52.154720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.732 [2024-11-28 18:51:52.154755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:22.732 [2024-11-28 18:51:52.154781] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.732 [2024-11-28 18:51:52.155172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.732 [2024-11-28 18:51:52.155233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:22.732 [2024-11-28 18:51:52.155336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:22.732 [2024-11-28 18:51:52.155383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:22.732 pt2 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 [2024-11-28 18:51:52.166614] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:22.732 [2024-11-28 18:51:52.166696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.732 [2024-11-28 18:51:52.166729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:22.732 [2024-11-28 18:51:52.166755] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.732 [2024-11-28 18:51:52.167090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.732 [2024-11-28 18:51:52.167148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:22.732 [2024-11-28 18:51:52.167228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:22.732 [2024-11-28 18:51:52.167271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:22.732 pt3 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 [2024-11-28 18:51:52.178611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:22.732 [2024-11-28 18:51:52.178685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.732 [2024-11-28 18:51:52.178714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 
00:11:22.732 [2024-11-28 18:51:52.178740] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.732 [2024-11-28 18:51:52.179063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.732 [2024-11-28 18:51:52.179114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:22.732 [2024-11-28 18:51:52.179191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:22.732 [2024-11-28 18:51:52.179259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:22.732 [2024-11-28 18:51:52.179409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:22.732 [2024-11-28 18:51:52.179420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:22.732 [2024-11-28 18:51:52.179666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:22.732 [2024-11-28 18:51:52.179794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:22.732 [2024-11-28 18:51:52.179806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:22.732 [2024-11-28 18:51:52.179900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.732 pt4 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.732 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.732 "name": "raid_bdev1", 00:11:22.732 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:22.732 "strip_size_kb": 0, 00:11:22.732 "state": "online", 00:11:22.732 "raid_level": "raid1", 00:11:22.732 "superblock": true, 00:11:22.732 "num_base_bdevs": 4, 00:11:22.732 "num_base_bdevs_discovered": 4, 00:11:22.732 "num_base_bdevs_operational": 4, 00:11:22.732 "base_bdevs_list": [ 00:11:22.732 { 00:11:22.732 "name": "pt1", 00:11:22.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.732 "is_configured": true, 00:11:22.732 
"data_offset": 2048, 00:11:22.733 "data_size": 63488 00:11:22.733 }, 00:11:22.733 { 00:11:22.733 "name": "pt2", 00:11:22.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.733 "is_configured": true, 00:11:22.733 "data_offset": 2048, 00:11:22.733 "data_size": 63488 00:11:22.733 }, 00:11:22.733 { 00:11:22.733 "name": "pt3", 00:11:22.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.733 "is_configured": true, 00:11:22.733 "data_offset": 2048, 00:11:22.733 "data_size": 63488 00:11:22.733 }, 00:11:22.733 { 00:11:22.733 "name": "pt4", 00:11:22.733 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.733 "is_configured": true, 00:11:22.733 "data_offset": 2048, 00:11:22.733 "data_size": 63488 00:11:22.733 } 00:11:22.733 ] 00:11:22.733 }' 00:11:22.733 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.733 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.992 [2024-11-28 18:51:52.558996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.992 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.992 "name": "raid_bdev1", 00:11:22.992 "aliases": [ 00:11:22.992 "28917192-b9c0-4f1d-b129-ca64560ffeae" 00:11:22.992 ], 00:11:22.992 "product_name": "Raid Volume", 00:11:22.992 "block_size": 512, 00:11:22.992 "num_blocks": 63488, 00:11:22.992 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:22.992 "assigned_rate_limits": { 00:11:22.992 "rw_ios_per_sec": 0, 00:11:22.992 "rw_mbytes_per_sec": 0, 00:11:22.992 "r_mbytes_per_sec": 0, 00:11:22.992 "w_mbytes_per_sec": 0 00:11:22.992 }, 00:11:22.992 "claimed": false, 00:11:22.992 "zoned": false, 00:11:22.992 "supported_io_types": { 00:11:22.992 "read": true, 00:11:22.992 "write": true, 00:11:22.992 "unmap": false, 00:11:22.992 "flush": false, 00:11:22.992 "reset": true, 00:11:22.992 "nvme_admin": false, 00:11:22.992 "nvme_io": false, 00:11:22.992 "nvme_io_md": false, 00:11:22.992 "write_zeroes": true, 00:11:22.992 "zcopy": false, 00:11:22.992 "get_zone_info": false, 00:11:22.992 "zone_management": false, 00:11:22.992 "zone_append": false, 00:11:22.992 "compare": false, 00:11:22.992 "compare_and_write": false, 00:11:22.992 "abort": false, 00:11:22.992 "seek_hole": false, 00:11:22.992 "seek_data": false, 00:11:22.992 "copy": false, 00:11:22.992 "nvme_iov_md": false 00:11:22.992 }, 00:11:22.992 "memory_domains": [ 00:11:22.992 { 00:11:22.992 "dma_device_id": "system", 00:11:22.992 "dma_device_type": 1 00:11:22.992 }, 00:11:22.992 { 00:11:22.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.992 "dma_device_type": 2 00:11:22.992 }, 00:11:22.992 { 00:11:22.992 "dma_device_id": "system", 00:11:22.992 "dma_device_type": 1 00:11:22.992 }, 00:11:22.992 { 
00:11:22.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.992 "dma_device_type": 2 00:11:22.992 }, 00:11:22.992 { 00:11:22.992 "dma_device_id": "system", 00:11:22.992 "dma_device_type": 1 00:11:22.992 }, 00:11:22.992 { 00:11:22.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.992 "dma_device_type": 2 00:11:22.992 }, 00:11:22.992 { 00:11:22.992 "dma_device_id": "system", 00:11:22.992 "dma_device_type": 1 00:11:22.992 }, 00:11:22.992 { 00:11:22.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.992 "dma_device_type": 2 00:11:22.992 } 00:11:22.992 ], 00:11:22.992 "driver_specific": { 00:11:22.992 "raid": { 00:11:22.992 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:22.992 "strip_size_kb": 0, 00:11:22.992 "state": "online", 00:11:22.992 "raid_level": "raid1", 00:11:22.992 "superblock": true, 00:11:22.992 "num_base_bdevs": 4, 00:11:22.992 "num_base_bdevs_discovered": 4, 00:11:22.992 "num_base_bdevs_operational": 4, 00:11:22.992 "base_bdevs_list": [ 00:11:22.992 { 00:11:22.992 "name": "pt1", 00:11:22.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.992 "is_configured": true, 00:11:22.992 "data_offset": 2048, 00:11:22.992 "data_size": 63488 00:11:22.992 }, 00:11:22.992 { 00:11:22.992 "name": "pt2", 00:11:22.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.992 "is_configured": true, 00:11:22.992 "data_offset": 2048, 00:11:22.992 "data_size": 63488 00:11:22.992 }, 00:11:22.992 { 00:11:22.992 "name": "pt3", 00:11:22.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.992 "is_configured": true, 00:11:22.992 "data_offset": 2048, 00:11:22.992 "data_size": 63488 00:11:22.992 }, 00:11:22.992 { 00:11:22.992 "name": "pt4", 00:11:22.992 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.992 "is_configured": true, 00:11:22.992 "data_offset": 2048, 00:11:22.992 "data_size": 63488 00:11:22.992 } 00:11:22.992 ] 00:11:22.992 } 00:11:22.992 } 00:11:22.992 }' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:23.252 pt2 00:11:23.252 pt3 00:11:23.252 pt4' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.252 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.511 18:51:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.511 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.511 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:23.511 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:23.511 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.511 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.512 [2024-11-28 18:51:52.867112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 28917192-b9c0-4f1d-b129-ca64560ffeae '!=' 28917192-b9c0-4f1d-b129-ca64560ffeae ']' 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.512 [2024-11-28 18:51:52.914856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:23.512 18:51:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.512 "name": "raid_bdev1", 00:11:23.512 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:23.512 "strip_size_kb": 0, 00:11:23.512 "state": "online", 00:11:23.512 "raid_level": "raid1", 00:11:23.512 "superblock": true, 00:11:23.512 "num_base_bdevs": 4, 00:11:23.512 "num_base_bdevs_discovered": 3, 00:11:23.512 "num_base_bdevs_operational": 3, 00:11:23.512 "base_bdevs_list": [ 00:11:23.512 { 
00:11:23.512 "name": null, 00:11:23.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.512 "is_configured": false, 00:11:23.512 "data_offset": 0, 00:11:23.512 "data_size": 63488 00:11:23.512 }, 00:11:23.512 { 00:11:23.512 "name": "pt2", 00:11:23.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:23.512 "is_configured": true, 00:11:23.512 "data_offset": 2048, 00:11:23.512 "data_size": 63488 00:11:23.512 }, 00:11:23.512 { 00:11:23.512 "name": "pt3", 00:11:23.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:23.512 "is_configured": true, 00:11:23.512 "data_offset": 2048, 00:11:23.512 "data_size": 63488 00:11:23.512 }, 00:11:23.512 { 00:11:23.512 "name": "pt4", 00:11:23.512 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:23.512 "is_configured": true, 00:11:23.512 "data_offset": 2048, 00:11:23.512 "data_size": 63488 00:11:23.512 } 00:11:23.512 ] 00:11:23.512 }' 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.512 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.771 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:23.771 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.771 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.771 [2024-11-28 18:51:53.322940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.771 [2024-11-28 18:51:53.323014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.771 [2024-11-28 18:51:53.323108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.771 [2024-11-28 18:51:53.323200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.771 [2024-11-28 18:51:53.323249] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:23.771 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.771 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.771 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:23.771 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.771 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.771 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.030 18:51:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.030 [2024-11-28 18:51:53.418944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:24.030 [2024-11-28 18:51:53.418992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.030 [2024-11-28 18:51:53.419008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:24.030 [2024-11-28 18:51:53.419017] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.030 [2024-11-28 18:51:53.421114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.030 [2024-11-28 18:51:53.421150] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:24.030 [2024-11-28 18:51:53.421227] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:24.030 [2024-11-28 18:51:53.421257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:24.030 pt2 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.030 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.030 "name": "raid_bdev1", 00:11:24.030 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:24.030 "strip_size_kb": 0, 00:11:24.030 "state": "configuring", 00:11:24.030 "raid_level": "raid1", 00:11:24.030 "superblock": true, 00:11:24.030 "num_base_bdevs": 4, 00:11:24.030 "num_base_bdevs_discovered": 1, 00:11:24.030 "num_base_bdevs_operational": 3, 00:11:24.030 "base_bdevs_list": [ 00:11:24.030 { 00:11:24.030 "name": null, 00:11:24.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.030 "is_configured": false, 00:11:24.030 "data_offset": 2048, 00:11:24.030 "data_size": 63488 00:11:24.030 }, 00:11:24.030 { 00:11:24.030 "name": "pt2", 00:11:24.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.030 "is_configured": true, 00:11:24.030 "data_offset": 2048, 00:11:24.030 "data_size": 63488 00:11:24.030 }, 00:11:24.030 { 00:11:24.030 "name": null, 00:11:24.030 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.030 "is_configured": false, 00:11:24.030 "data_offset": 2048, 00:11:24.030 "data_size": 63488 00:11:24.030 }, 00:11:24.030 { 00:11:24.030 "name": null, 00:11:24.030 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.030 "is_configured": false, 00:11:24.031 "data_offset": 2048, 00:11:24.031 "data_size": 63488 00:11:24.031 } 00:11:24.031 ] 00:11:24.031 }' 00:11:24.031 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.031 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.290 [2024-11-28 18:51:53.755057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:24.290 [2024-11-28 18:51:53.755148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.290 [2024-11-28 18:51:53.755186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:24.290 [2024-11-28 18:51:53.755213] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.290 [2024-11-28 18:51:53.755645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.290 [2024-11-28 18:51:53.755700] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:24.290 [2024-11-28 18:51:53.755793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:24.290 [2024-11-28 18:51:53.755840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:24.290 pt3 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.290 
18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.290 "name": "raid_bdev1", 00:11:24.290 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:24.290 "strip_size_kb": 0, 00:11:24.290 "state": "configuring", 00:11:24.290 "raid_level": "raid1", 00:11:24.290 "superblock": true, 00:11:24.290 "num_base_bdevs": 4, 00:11:24.290 "num_base_bdevs_discovered": 2, 00:11:24.290 "num_base_bdevs_operational": 3, 00:11:24.290 "base_bdevs_list": [ 00:11:24.290 { 00:11:24.290 "name": null, 00:11:24.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.290 "is_configured": false, 00:11:24.290 "data_offset": 2048, 00:11:24.290 "data_size": 63488 00:11:24.290 }, 
00:11:24.290 { 00:11:24.290 "name": "pt2", 00:11:24.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.290 "is_configured": true, 00:11:24.290 "data_offset": 2048, 00:11:24.290 "data_size": 63488 00:11:24.290 }, 00:11:24.290 { 00:11:24.290 "name": "pt3", 00:11:24.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.290 "is_configured": true, 00:11:24.290 "data_offset": 2048, 00:11:24.290 "data_size": 63488 00:11:24.290 }, 00:11:24.290 { 00:11:24.290 "name": null, 00:11:24.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.290 "is_configured": false, 00:11:24.290 "data_offset": 2048, 00:11:24.290 "data_size": 63488 00:11:24.290 } 00:11:24.290 ] 00:11:24.290 }' 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.290 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.858 [2024-11-28 18:51:54.215193] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:24.858 [2024-11-28 18:51:54.215307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.858 [2024-11-28 18:51:54.215336] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:24.858 [2024-11-28 18:51:54.215344] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.858 [2024-11-28 18:51:54.215819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.858 [2024-11-28 18:51:54.215839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:24.858 [2024-11-28 18:51:54.215919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:24.858 [2024-11-28 18:51:54.215943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:24.858 [2024-11-28 18:51:54.216045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.858 [2024-11-28 18:51:54.216053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.858 [2024-11-28 18:51:54.216283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:11:24.858 [2024-11-28 18:51:54.216404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.858 [2024-11-28 18:51:54.216417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:24.858 [2024-11-28 18:51:54.216539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.858 pt4 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.858 "name": "raid_bdev1", 00:11:24.858 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:24.858 "strip_size_kb": 0, 00:11:24.858 "state": "online", 00:11:24.858 "raid_level": "raid1", 00:11:24.858 "superblock": true, 00:11:24.858 "num_base_bdevs": 4, 00:11:24.858 "num_base_bdevs_discovered": 3, 00:11:24.858 "num_base_bdevs_operational": 3, 00:11:24.858 "base_bdevs_list": [ 00:11:24.858 { 00:11:24.858 "name": null, 00:11:24.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.858 "is_configured": false, 00:11:24.858 "data_offset": 2048, 00:11:24.858 "data_size": 63488 00:11:24.858 }, 00:11:24.858 { 00:11:24.858 "name": "pt2", 00:11:24.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.858 "is_configured": true, 00:11:24.858 "data_offset": 2048, 00:11:24.858 
"data_size": 63488 00:11:24.858 }, 00:11:24.858 { 00:11:24.858 "name": "pt3", 00:11:24.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.858 "is_configured": true, 00:11:24.858 "data_offset": 2048, 00:11:24.858 "data_size": 63488 00:11:24.858 }, 00:11:24.858 { 00:11:24.858 "name": "pt4", 00:11:24.858 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.858 "is_configured": true, 00:11:24.858 "data_offset": 2048, 00:11:24.858 "data_size": 63488 00:11:24.858 } 00:11:24.858 ] 00:11:24.858 }' 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.858 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.118 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.118 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.118 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.118 [2024-11-28 18:51:54.611258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.118 [2024-11-28 18:51:54.611326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.118 [2024-11-28 18:51:54.611410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.118 [2024-11-28 18:51:54.611522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.118 [2024-11-28 18:51:54.611574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:25.118 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.118 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.118 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- 
# jq -r '.[]' 00:11:25.118 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.118 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.118 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.119 [2024-11-28 18:51:54.667278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:25.119 [2024-11-28 18:51:54.667384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.119 [2024-11-28 18:51:54.667419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:25.119 [2024-11-28 18:51:54.667479] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.119 
[2024-11-28 18:51:54.669607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.119 [2024-11-28 18:51:54.669692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.119 [2024-11-28 18:51:54.669773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:25.119 [2024-11-28 18:51:54.669838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.119 [2024-11-28 18:51:54.669971] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:25.119 [2024-11-28 18:51:54.670038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.119 [2024-11-28 18:51:54.670070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:11:25.119 [2024-11-28 18:51:54.670151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:25.119 [2024-11-28 18:51:54.670275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:25.119 pt1 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.119 18:51:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.119 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.379 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.379 "name": "raid_bdev1", 00:11:25.379 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:25.379 "strip_size_kb": 0, 00:11:25.379 "state": "configuring", 00:11:25.379 "raid_level": "raid1", 00:11:25.379 "superblock": true, 00:11:25.379 "num_base_bdevs": 4, 00:11:25.379 "num_base_bdevs_discovered": 2, 00:11:25.379 "num_base_bdevs_operational": 3, 00:11:25.379 "base_bdevs_list": [ 00:11:25.379 { 00:11:25.379 "name": null, 00:11:25.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.379 "is_configured": false, 00:11:25.379 "data_offset": 2048, 00:11:25.379 "data_size": 63488 00:11:25.379 }, 00:11:25.379 { 00:11:25.379 "name": "pt2", 00:11:25.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.379 "is_configured": true, 00:11:25.379 "data_offset": 2048, 00:11:25.379 "data_size": 63488 00:11:25.379 }, 
00:11:25.379 { 00:11:25.379 "name": "pt3", 00:11:25.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.379 "is_configured": true, 00:11:25.379 "data_offset": 2048, 00:11:25.379 "data_size": 63488 00:11:25.379 }, 00:11:25.379 { 00:11:25.379 "name": null, 00:11:25.379 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.379 "is_configured": false, 00:11:25.379 "data_offset": 2048, 00:11:25.379 "data_size": 63488 00:11:25.379 } 00:11:25.379 ] 00:11:25.379 }' 00:11:25.379 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.379 18:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.637 [2024-11-28 18:51:55.147400] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:25.637 [2024-11-28 18:51:55.147480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.637 [2024-11-28 18:51:55.147502] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:25.637 [2024-11-28 18:51:55.147511] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.637 [2024-11-28 18:51:55.147900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.637 [2024-11-28 18:51:55.147923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:25.637 [2024-11-28 18:51:55.147990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:25.637 [2024-11-28 18:51:55.148010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:25.637 [2024-11-28 18:51:55.148114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:25.637 [2024-11-28 18:51:55.148123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:25.637 [2024-11-28 18:51:55.148358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:25.637 [2024-11-28 18:51:55.148489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:25.637 [2024-11-28 18:51:55.148502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:25.637 [2024-11-28 18:51:55.148607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.637 pt4 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.637 18:51:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.637 "name": "raid_bdev1", 00:11:25.637 "uuid": "28917192-b9c0-4f1d-b129-ca64560ffeae", 00:11:25.637 "strip_size_kb": 0, 00:11:25.637 "state": "online", 00:11:25.637 "raid_level": "raid1", 00:11:25.637 "superblock": true, 00:11:25.637 "num_base_bdevs": 4, 00:11:25.637 "num_base_bdevs_discovered": 3, 00:11:25.637 "num_base_bdevs_operational": 3, 00:11:25.637 "base_bdevs_list": [ 00:11:25.637 { 00:11:25.637 "name": null, 00:11:25.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.637 "is_configured": false, 00:11:25.637 "data_offset": 2048, 00:11:25.637 "data_size": 63488 00:11:25.637 }, 00:11:25.637 { 
00:11:25.637 "name": "pt2", 00:11:25.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.637 "is_configured": true, 00:11:25.637 "data_offset": 2048, 00:11:25.637 "data_size": 63488 00:11:25.637 }, 00:11:25.637 { 00:11:25.637 "name": "pt3", 00:11:25.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.637 "is_configured": true, 00:11:25.637 "data_offset": 2048, 00:11:25.637 "data_size": 63488 00:11:25.637 }, 00:11:25.637 { 00:11:25.637 "name": "pt4", 00:11:25.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.637 "is_configured": true, 00:11:25.637 "data_offset": 2048, 00:11:25.637 "data_size": 63488 00:11:25.637 } 00:11:25.637 ] 00:11:25.637 }' 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.637 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:26.206 [2024-11-28 
18:51:55.595821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 28917192-b9c0-4f1d-b129-ca64560ffeae '!=' 28917192-b9c0-4f1d-b129-ca64560ffeae ']' 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 86769 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 86769 ']' 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 86769 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86769 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86769' 00:11:26.206 killing process with pid 86769 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 86769 00:11:26.206 [2024-11-28 18:51:55.679437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.206 [2024-11-28 18:51:55.679579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.206 [2024-11-28 18:51:55.679689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in
destruct 00:11:26.206 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 86769 00:11:26.206 [2024-11-28 18:51:55.679757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:26.206 [2024-11-28 18:51:55.722860] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.467 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:26.467 00:11:26.467 real 0m6.672s 00:11:26.467 user 0m11.203s 00:11:26.467 sys 0m1.381s 00:11:26.467 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.467 ************************************ 00:11:26.467 END TEST raid_superblock_test ************************************ 00:11:26.467 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.467 18:51:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:26.467 18:51:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:26.467 18:51:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.467 18:51:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.467 ************************************ 00:11:26.467 START TEST raid_read_error_test 00:11:26.467 ************************************ 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.467 18:51:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dZrxN3tsJA 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87240 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87240 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 87240 ']' 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.467 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.728 [2024-11-28 18:51:56.114992] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:11:26.728 [2024-11-28 18:51:56.115110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87240 ] 00:11:26.728 [2024-11-28 18:51:56.247824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:26.728 [2024-11-28 18:51:56.274706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.728 [2024-11-28 18:51:56.300553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.988 [2024-11-28 18:51:56.342536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.988 [2024-11-28 18:51:56.342581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.560 BaseBdev1_malloc 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.560 18:51:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.560 true 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.560 [2024-11-28 18:51:56.962356] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:27.560 [2024-11-28 18:51:56.962420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.560 [2024-11-28 18:51:56.962448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:27.560 [2024-11-28 18:51:56.962461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.560 [2024-11-28 18:51:56.964595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.560 [2024-11-28 18:51:56.964634] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:27.560 BaseBdev1 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.560 BaseBdev2_malloc 00:11:27.560 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:11:27.561 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:27.561 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 true 00:11:27.561 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.561 18:51:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:27.561 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 [2024-11-28 18:51:57.003057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:27.561 [2024-11-28 18:51:57.003105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.561 [2024-11-28 18:51:57.003121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:27.561 [2024-11-28 18:51:57.003132] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.561 [2024-11-28 18:51:57.005236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.561 [2024-11-28 18:51:57.005275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:27.561 BaseBdev2 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 BaseBdev3_malloc 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 true 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 [2024-11-28 18:51:57.043480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:27.561 [2024-11-28 18:51:57.043524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.561 [2024-11-28 18:51:57.043539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:27.561 [2024-11-28 18:51:57.043549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.561 [2024-11-28 18:51:57.045567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.561 [2024-11-28 18:51:57.045606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:27.561 BaseBdev3 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.561 18:51:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 BaseBdev4_malloc 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 true 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 [2024-11-28 18:51:57.101625] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:27.561 [2024-11-28 18:51:57.101725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.561 [2024-11-28 18:51:57.101764] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:27.561 [2024-11-28 18:51:57.101796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.561 [2024-11-28 18:51:57.103976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.561 
[2024-11-28 18:51:57.104055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:27.561 BaseBdev4 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 [2024-11-28 18:51:57.113675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.561 [2024-11-28 18:51:57.115490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.561 [2024-11-28 18:51:57.115594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.561 [2024-11-28 18:51:57.115693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:27.561 [2024-11-28 18:51:57.115935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:27.561 [2024-11-28 18:51:57.115984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.561 [2024-11-28 18:51:57.116234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:11:27.561 [2024-11-28 18:51:57.116407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:27.561 [2024-11-28 18:51:57.116464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:27.561 [2024-11-28 18:51:57.116630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.561 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.833 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.833 "name": "raid_bdev1", 00:11:27.833 "uuid": "b70cc2bc-1fbd-492e-9086-6d9c408ab433", 00:11:27.833 "strip_size_kb": 0, 00:11:27.833 "state": "online", 00:11:27.833 "raid_level": "raid1", 00:11:27.833 "superblock": true, 
00:11:27.833 "num_base_bdevs": 4, 00:11:27.833 "num_base_bdevs_discovered": 4, 00:11:27.833 "num_base_bdevs_operational": 4, 00:11:27.833 "base_bdevs_list": [ 00:11:27.833 { 00:11:27.833 "name": "BaseBdev1", 00:11:27.833 "uuid": "d4cf0235-5a92-5d6e-97ac-344c421f9822", 00:11:27.833 "is_configured": true, 00:11:27.833 "data_offset": 2048, 00:11:27.833 "data_size": 63488 00:11:27.833 }, 00:11:27.833 { 00:11:27.833 "name": "BaseBdev2", 00:11:27.833 "uuid": "518e441b-2e51-5dad-b2c4-6d8aad6d38a8", 00:11:27.833 "is_configured": true, 00:11:27.833 "data_offset": 2048, 00:11:27.833 "data_size": 63488 00:11:27.833 }, 00:11:27.833 { 00:11:27.833 "name": "BaseBdev3", 00:11:27.833 "uuid": "ba3b080b-fa4a-51b5-a748-326de0a09a13", 00:11:27.833 "is_configured": true, 00:11:27.833 "data_offset": 2048, 00:11:27.833 "data_size": 63488 00:11:27.833 }, 00:11:27.833 { 00:11:27.833 "name": "BaseBdev4", 00:11:27.833 "uuid": "0a483ca6-aae5-50fc-b5fc-e791d98c4c3d", 00:11:27.833 "is_configured": true, 00:11:27.833 "data_offset": 2048, 00:11:27.833 "data_size": 63488 00:11:27.833 } 00:11:27.833 ] 00:11:27.833 }' 00:11:27.833 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.833 18:51:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.105 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:28.105 18:51:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:28.105 [2024-11-28 18:51:57.602124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.045 "name": "raid_bdev1", 00:11:29.045 "uuid": "b70cc2bc-1fbd-492e-9086-6d9c408ab433", 00:11:29.045 "strip_size_kb": 0, 00:11:29.045 "state": "online", 00:11:29.045 "raid_level": "raid1", 00:11:29.045 "superblock": true, 00:11:29.045 "num_base_bdevs": 4, 00:11:29.045 "num_base_bdevs_discovered": 4, 00:11:29.045 "num_base_bdevs_operational": 4, 00:11:29.045 "base_bdevs_list": [ 00:11:29.045 { 00:11:29.045 "name": "BaseBdev1", 00:11:29.045 "uuid": "d4cf0235-5a92-5d6e-97ac-344c421f9822", 00:11:29.045 "is_configured": true, 00:11:29.045 "data_offset": 2048, 00:11:29.045 "data_size": 63488 00:11:29.045 }, 00:11:29.045 { 00:11:29.045 "name": "BaseBdev2", 00:11:29.045 "uuid": "518e441b-2e51-5dad-b2c4-6d8aad6d38a8", 00:11:29.045 "is_configured": true, 00:11:29.045 "data_offset": 2048, 00:11:29.045 "data_size": 63488 00:11:29.045 }, 00:11:29.045 { 00:11:29.045 "name": "BaseBdev3", 00:11:29.045 "uuid": "ba3b080b-fa4a-51b5-a748-326de0a09a13", 00:11:29.045 "is_configured": true, 00:11:29.045 "data_offset": 2048, 00:11:29.045 "data_size": 63488 00:11:29.045 }, 00:11:29.045 { 00:11:29.045 "name": "BaseBdev4", 00:11:29.045 "uuid": "0a483ca6-aae5-50fc-b5fc-e791d98c4c3d", 00:11:29.045 "is_configured": true, 00:11:29.045 "data_offset": 2048, 00:11:29.045 "data_size": 63488 00:11:29.045 } 00:11:29.045 ] 00:11:29.045 }' 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.045 18:51:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.615 18:51:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:11:29.615 18:51:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.615 18:51:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.615 [2024-11-28 18:51:59.001341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:29.615 [2024-11-28 18:51:59.001379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.615 [2024-11-28 18:51:59.004059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.615 [2024-11-28 18:51:59.004131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.615 [2024-11-28 18:51:59.004248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.615 [2024-11-28 18:51:59.004261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:29.615 { 00:11:29.615 "results": [ 00:11:29.615 { 00:11:29.615 "job": "raid_bdev1", 00:11:29.615 "core_mask": "0x1", 00:11:29.615 "workload": "randrw", 00:11:29.615 "percentage": 50, 00:11:29.615 "status": "finished", 00:11:29.615 "queue_depth": 1, 00:11:29.615 "io_size": 131072, 00:11:29.615 "runtime": 1.397409, 00:11:29.615 "iops": 11742.446198643347, 00:11:29.615 "mibps": 1467.8057748304184, 00:11:29.615 "io_failed": 0, 00:11:29.615 "io_timeout": 0, 00:11:29.615 "avg_latency_us": 82.62887835229253, 00:11:29.615 "min_latency_us": 22.647956070774864, 00:11:29.615 "max_latency_us": 1356.646038525233 00:11:29.615 } 00:11:29.615 ], 00:11:29.615 "core_count": 1 00:11:29.615 } 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87240 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 87240 ']' 
00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 87240 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87240 00:11:29.615 killing process with pid 87240 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87240' 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 87240 00:11:29.615 [2024-11-28 18:51:59.046613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.615 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 87240 00:11:29.615 [2024-11-28 18:51:59.083328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dZrxN3tsJA 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:29.875 ************************************ 00:11:29.875 END TEST raid_read_error_test 00:11:29.875 ************************************ 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- 
# case $1 in 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:29.875 00:11:29.875 real 0m3.298s 00:11:29.875 user 0m4.133s 00:11:29.875 sys 0m0.541s 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.875 18:51:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.875 18:51:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:29.875 18:51:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.875 18:51:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.875 18:51:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.875 ************************************ 00:11:29.875 START TEST raid_write_error_test 00:11:29.875 ************************************ 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.875 
18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:29.875 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:29.876 18:51:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wlx2OR8mDW 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87369 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87369 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 87369 ']' 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.876 18:51:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.136 [2024-11-28 18:51:59.481214] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:30.136 [2024-11-28 18:51:59.481330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87369 ] 00:11:30.136 [2024-11-28 18:51:59.617386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:30.136 [2024-11-28 18:51:59.655762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.136 [2024-11-28 18:51:59.682650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.136 [2024-11-28 18:51:59.726358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.136 [2024-11-28 18:51:59.726399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.705 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.705 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.705 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.705 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.705 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.705 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 BaseBdev1_malloc 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 true 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.966 18:52:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 [2024-11-28 18:52:00.336412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:30.966 [2024-11-28 18:52:00.336483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.966 [2024-11-28 18:52:00.336505] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:30.966 [2024-11-28 18:52:00.336521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.966 [2024-11-28 18:52:00.338652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.966 [2024-11-28 18:52:00.338690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.966 BaseBdev1 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 BaseBdev2_malloc 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 true 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 [2024-11-28 18:52:00.377350] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:30.966 [2024-11-28 18:52:00.377401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.966 [2024-11-28 18:52:00.377417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:30.966 [2024-11-28 18:52:00.377437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.966 [2024-11-28 18:52:00.379487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.966 [2024-11-28 18:52:00.379522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.966 BaseBdev2 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 BaseBdev3_malloc 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 true 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 [2024-11-28 18:52:00.418096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:30.966 [2024-11-28 18:52:00.418149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.966 [2024-11-28 18:52:00.418167] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:30.966 [2024-11-28 18:52:00.418177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.966 [2024-11-28 18:52:00.420248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.966 [2024-11-28 18:52:00.420287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:30.966 BaseBdev3 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.966 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.967 BaseBdev4_malloc 00:11:30.967 
18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.967 true 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.967 [2024-11-28 18:52:00.473415] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:30.967 [2024-11-28 18:52:00.473495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.967 [2024-11-28 18:52:00.473513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.967 [2024-11-28 18:52:00.473524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.967 [2024-11-28 18:52:00.475554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.967 [2024-11-28 18:52:00.475592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:30.967 BaseBdev4 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:30.967 18:52:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.967 [2024-11-28 18:52:00.485464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.967 [2024-11-28 18:52:00.487284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.967 [2024-11-28 18:52:00.487357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.967 [2024-11-28 18:52:00.487409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.967 [2024-11-28 18:52:00.487642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.967 [2024-11-28 18:52:00.487664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:30.967 [2024-11-28 18:52:00.487913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:11:30.967 [2024-11-28 18:52:00.488064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.967 [2024-11-28 18:52:00.488079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:30.967 [2024-11-28 18:52:00.488197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.967 "name": "raid_bdev1", 00:11:30.967 "uuid": "c7a9616d-5e3c-4190-a276-c96723850b1f", 00:11:30.967 "strip_size_kb": 0, 00:11:30.967 "state": "online", 00:11:30.967 "raid_level": "raid1", 00:11:30.967 "superblock": true, 00:11:30.967 "num_base_bdevs": 4, 00:11:30.967 "num_base_bdevs_discovered": 4, 00:11:30.967 "num_base_bdevs_operational": 4, 00:11:30.967 "base_bdevs_list": [ 00:11:30.967 { 00:11:30.967 "name": "BaseBdev1", 00:11:30.967 "uuid": "074ad2bb-4b97-5472-b6ed-5ef0c1a1b315", 00:11:30.967 "is_configured": true, 00:11:30.967 "data_offset": 2048, 00:11:30.967 "data_size": 63488 00:11:30.967 }, 00:11:30.967 { 00:11:30.967 
"name": "BaseBdev2", 00:11:30.967 "uuid": "c906524f-52d9-5be3-9e67-521983ec7603", 00:11:30.967 "is_configured": true, 00:11:30.967 "data_offset": 2048, 00:11:30.967 "data_size": 63488 00:11:30.967 }, 00:11:30.967 { 00:11:30.967 "name": "BaseBdev3", 00:11:30.967 "uuid": "ba86e252-1ecc-59a8-852a-3efd06407611", 00:11:30.967 "is_configured": true, 00:11:30.967 "data_offset": 2048, 00:11:30.967 "data_size": 63488 00:11:30.967 }, 00:11:30.967 { 00:11:30.967 "name": "BaseBdev4", 00:11:30.967 "uuid": "324c1918-1e36-54cd-822e-30cfefd03c22", 00:11:30.967 "is_configured": true, 00:11:30.967 "data_offset": 2048, 00:11:30.967 "data_size": 63488 00:11:30.967 } 00:11:30.967 ] 00:11:30.967 }' 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.967 18:52:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.537 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:31.537 18:52:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:31.537 [2024-11-28 18:52:01.050001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:32.476 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:32.476 18:52:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.477 [2024-11-28 18:52:01.970672] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:32.477 [2024-11-28 18:52:01.970727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.477 [2024-11-28 18:52:01.970962] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 
raid_ch: 0x60d000006e50 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.477 18:52:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.477 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.477 "name": "raid_bdev1", 00:11:32.477 "uuid": "c7a9616d-5e3c-4190-a276-c96723850b1f", 00:11:32.477 "strip_size_kb": 0, 00:11:32.477 "state": "online", 00:11:32.477 "raid_level": "raid1", 00:11:32.477 "superblock": true, 00:11:32.477 "num_base_bdevs": 4, 00:11:32.477 "num_base_bdevs_discovered": 3, 00:11:32.477 "num_base_bdevs_operational": 3, 00:11:32.477 "base_bdevs_list": [ 00:11:32.477 { 00:11:32.477 "name": null, 00:11:32.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.477 "is_configured": false, 00:11:32.477 "data_offset": 0, 00:11:32.477 "data_size": 63488 00:11:32.477 }, 00:11:32.477 { 00:11:32.477 "name": "BaseBdev2", 00:11:32.477 "uuid": "c906524f-52d9-5be3-9e67-521983ec7603", 00:11:32.477 "is_configured": true, 00:11:32.477 "data_offset": 2048, 00:11:32.477 "data_size": 63488 00:11:32.477 }, 00:11:32.477 { 00:11:32.477 "name": "BaseBdev3", 00:11:32.477 "uuid": "ba86e252-1ecc-59a8-852a-3efd06407611", 00:11:32.477 "is_configured": true, 00:11:32.477 "data_offset": 2048, 00:11:32.477 "data_size": 63488 00:11:32.477 }, 00:11:32.477 { 00:11:32.477 "name": "BaseBdev4", 00:11:32.477 "uuid": "324c1918-1e36-54cd-822e-30cfefd03c22", 00:11:32.477 "is_configured": true, 00:11:32.477 "data_offset": 2048, 00:11:32.477 "data_size": 63488 00:11:32.477 } 00:11:32.477 ] 00:11:32.477 }' 00:11:32.477 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.477 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.047 [2024-11-28 18:52:02.423142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.047 [2024-11-28 18:52:02.423186] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.047 [2024-11-28 18:52:02.425859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.047 [2024-11-28 18:52:02.425908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.047 [2024-11-28 18:52:02.426006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.047 [2024-11-28 18:52:02.426022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.047 { 00:11:33.047 "results": [ 00:11:33.047 { 00:11:33.047 "job": "raid_bdev1", 00:11:33.047 "core_mask": "0x1", 00:11:33.047 "workload": "randrw", 00:11:33.047 "percentage": 50, 00:11:33.047 "status": "finished", 00:11:33.047 "queue_depth": 1, 00:11:33.047 "io_size": 131072, 00:11:33.047 "runtime": 1.37143, 00:11:33.047 "iops": 12474.57033898923, 00:11:33.047 "mibps": 1559.3212923736537, 00:11:33.047 "io_failed": 0, 00:11:33.047 "io_timeout": 0, 00:11:33.047 "avg_latency_us": 77.61278113168119, 00:11:33.047 "min_latency_us": 22.536389784711933, 00:11:33.047 "max_latency_us": 1335.2253116011505 00:11:33.047 } 00:11:33.047 ], 00:11:33.047 "core_count": 1 00:11:33.047 } 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87369 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 87369 ']' 
00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 87369 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87369 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.047 killing process with pid 87369 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87369' 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 87369 00:11:33.047 [2024-11-28 18:52:02.464153] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.047 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 87369 00:11:33.047 [2024-11-28 18:52:02.500304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.306 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wlx2OR8mDW 00:11:33.306 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:33.307 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:33.307 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:33.307 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:33.307 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:33.307 18:52:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:33.307 18:52:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:33.307 00:11:33.307 real 0m3.346s 00:11:33.307 user 0m4.220s 00:11:33.307 sys 0m0.558s 00:11:33.307 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.307 18:52:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.307 ************************************ 00:11:33.307 END TEST raid_write_error_test 00:11:33.307 ************************************ 00:11:33.307 18:52:02 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:33.307 18:52:02 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:33.307 18:52:02 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:33.307 18:52:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:33.307 18:52:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.307 18:52:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.307 ************************************ 00:11:33.307 START TEST raid_rebuild_test 00:11:33.307 ************************************ 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87500 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 87500 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 87500 ']' 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.307 18:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.307 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:33.307 Zero copy mechanism will not be used. 00:11:33.307 [2024-11-28 18:52:02.892594] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:33.307 [2024-11-28 18:52:02.892722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87500 ] 00:11:33.567 [2024-11-28 18:52:03.027784] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:33.567 [2024-11-28 18:52:03.066819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.567 [2024-11-28 18:52:03.092579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.567 [2024-11-28 18:52:03.135327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.567 [2024-11-28 18:52:03.135383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.136 BaseBdev1_malloc 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.136 [2024-11-28 18:52:03.728472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:34.136 [2024-11-28 18:52:03.728538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.136 [2024-11-28 18:52:03.728565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:34.136 [2024-11-28 18:52:03.728579] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.136 [2024-11-28 18:52:03.730670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.136 [2024-11-28 18:52:03.730707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:34.136 BaseBdev1 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.136 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.396 BaseBdev2_malloc 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.396 [2024-11-28 18:52:03.757197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:34.396 [2024-11-28 18:52:03.757250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.396 [2024-11-28 18:52:03.757267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:34.396 [2024-11-28 18:52:03.757277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.396 [2024-11-28 18:52:03.759304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.396 [2024-11-28 18:52:03.759342] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:34.396 BaseBdev2 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.396 spare_malloc 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.396 spare_delay 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.396 [2024-11-28 18:52:03.797813] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:34.396 [2024-11-28 18:52:03.797880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.396 [2024-11-28 18:52:03.797899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:34.396 [2024-11-28 18:52:03.797910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.396 [2024-11-28 
18:52:03.799978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.396 [2024-11-28 18:52:03.800019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:34.396 spare 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.396 [2024-11-28 18:52:03.809879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.396 [2024-11-28 18:52:03.811705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.396 [2024-11-28 18:52:03.811802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:34.396 [2024-11-28 18:52:03.811814] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:34.396 [2024-11-28 18:52:03.812047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:34.396 [2024-11-28 18:52:03.812188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:34.396 [2024-11-28 18:52:03.812202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:34.396 [2024-11-28 18:52:03.812329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:34.396 18:52:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.396 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.397 "name": "raid_bdev1", 00:11:34.397 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:34.397 "strip_size_kb": 0, 00:11:34.397 "state": "online", 00:11:34.397 "raid_level": "raid1", 00:11:34.397 "superblock": false, 00:11:34.397 "num_base_bdevs": 2, 00:11:34.397 "num_base_bdevs_discovered": 2, 00:11:34.397 "num_base_bdevs_operational": 2, 00:11:34.397 "base_bdevs_list": [ 00:11:34.397 { 00:11:34.397 "name": "BaseBdev1", 
00:11:34.397 "uuid": "7ea4fda5-d047-5abc-830a-6761ed19a02b", 00:11:34.397 "is_configured": true, 00:11:34.397 "data_offset": 0, 00:11:34.397 "data_size": 65536 00:11:34.397 }, 00:11:34.397 { 00:11:34.397 "name": "BaseBdev2", 00:11:34.397 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:34.397 "is_configured": true, 00:11:34.397 "data_offset": 0, 00:11:34.397 "data_size": 65536 00:11:34.397 } 00:11:34.397 ] 00:11:34.397 }' 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.397 18:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:34.708 [2024-11-28 18:52:04.206235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:34.708 
18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:34.708 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:34.967 [2024-11-28 18:52:04.462108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:34.967 /dev/nbd0 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.967 1+0 records in 00:11:34.967 1+0 records out 00:11:34.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406716 s, 10.1 MB/s 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:34.967 18:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:11:40.244 65536+0 records in 00:11:40.244 65536+0 records out 00:11:40.244 33554432 bytes (34 MB, 32 MiB) copied, 4.82248 s, 7.0 MB/s 00:11:40.244 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:40.244 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:40.244 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:40.245 [2024-11-28 18:52:09.548550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.245 [2024-11-28 18:52:09.580585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.245 18:52:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.245 "name": "raid_bdev1", 00:11:40.245 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:40.245 "strip_size_kb": 0, 00:11:40.245 "state": "online", 00:11:40.245 "raid_level": "raid1", 00:11:40.245 "superblock": false, 00:11:40.245 "num_base_bdevs": 2, 00:11:40.245 "num_base_bdevs_discovered": 1, 00:11:40.245 "num_base_bdevs_operational": 1, 00:11:40.245 "base_bdevs_list": [ 00:11:40.245 { 00:11:40.245 "name": null, 00:11:40.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.245 "is_configured": false, 00:11:40.245 "data_offset": 0, 00:11:40.245 "data_size": 65536 00:11:40.245 }, 00:11:40.245 { 00:11:40.245 "name": "BaseBdev2", 00:11:40.245 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:40.245 "is_configured": true, 00:11:40.245 "data_offset": 0, 00:11:40.245 "data_size": 65536 00:11:40.245 } 00:11:40.245 ] 00:11:40.245 }' 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.245 18:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.505 18:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:40.505 18:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.505 18:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.505 [2024-11-28 18:52:10.004667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:40.505 [2024-11-28 18:52:10.009764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09fe0 00:11:40.505 18:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.505 18:52:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:40.505 [2024-11-28 18:52:10.011650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.444 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.704 "name": "raid_bdev1", 00:11:41.704 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:41.704 "strip_size_kb": 0, 00:11:41.704 "state": "online", 00:11:41.704 "raid_level": "raid1", 00:11:41.704 "superblock": false, 00:11:41.704 "num_base_bdevs": 2, 00:11:41.704 "num_base_bdevs_discovered": 2, 00:11:41.704 "num_base_bdevs_operational": 2, 00:11:41.704 "process": { 00:11:41.704 "type": "rebuild", 00:11:41.704 "target": "spare", 00:11:41.704 "progress": { 00:11:41.704 "blocks": 20480, 00:11:41.704 "percent": 31 00:11:41.704 } 00:11:41.704 }, 00:11:41.704 "base_bdevs_list": [ 00:11:41.704 { 00:11:41.704 "name": "spare", 00:11:41.704 "uuid": "3f199c04-6bce-5b34-bd63-a27033e019a0", 00:11:41.704 "is_configured": true, 00:11:41.704 "data_offset": 0, 00:11:41.704 
"data_size": 65536 00:11:41.704 }, 00:11:41.704 { 00:11:41.704 "name": "BaseBdev2", 00:11:41.704 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:41.704 "is_configured": true, 00:11:41.704 "data_offset": 0, 00:11:41.704 "data_size": 65536 00:11:41.704 } 00:11:41.704 ] 00:11:41.704 }' 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.704 [2024-11-28 18:52:11.142113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:41.704 [2024-11-28 18:52:11.218596] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:41.704 [2024-11-28 18:52:11.218698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.704 [2024-11-28 18:52:11.218731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:41.704 [2024-11-28 18:52:11.218753] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.704 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.704 "name": "raid_bdev1", 00:11:41.704 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:41.704 "strip_size_kb": 0, 00:11:41.704 "state": "online", 00:11:41.704 "raid_level": "raid1", 00:11:41.704 "superblock": false, 00:11:41.704 "num_base_bdevs": 2, 00:11:41.704 "num_base_bdevs_discovered": 1, 00:11:41.704 "num_base_bdevs_operational": 1, 00:11:41.704 "base_bdevs_list": [ 00:11:41.704 { 00:11:41.704 "name": null, 00:11:41.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.705 
"is_configured": false, 00:11:41.705 "data_offset": 0, 00:11:41.705 "data_size": 65536 00:11:41.705 }, 00:11:41.705 { 00:11:41.705 "name": "BaseBdev2", 00:11:41.705 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:41.705 "is_configured": true, 00:11:41.705 "data_offset": 0, 00:11:41.705 "data_size": 65536 00:11:41.705 } 00:11:41.705 ] 00:11:41.705 }' 00:11:41.705 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.705 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.273 "name": "raid_bdev1", 00:11:42.273 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:42.273 "strip_size_kb": 0, 00:11:42.273 "state": "online", 00:11:42.273 "raid_level": "raid1", 00:11:42.273 "superblock": false, 00:11:42.273 "num_base_bdevs": 2, 00:11:42.273 
"num_base_bdevs_discovered": 1, 00:11:42.273 "num_base_bdevs_operational": 1, 00:11:42.273 "base_bdevs_list": [ 00:11:42.273 { 00:11:42.273 "name": null, 00:11:42.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.273 "is_configured": false, 00:11:42.273 "data_offset": 0, 00:11:42.273 "data_size": 65536 00:11:42.273 }, 00:11:42.273 { 00:11:42.273 "name": "BaseBdev2", 00:11:42.273 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:42.273 "is_configured": true, 00:11:42.273 "data_offset": 0, 00:11:42.273 "data_size": 65536 00:11:42.273 } 00:11:42.273 ] 00:11:42.273 }' 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.273 [2024-11-28 18:52:11.755675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:42.273 [2024-11-28 18:52:11.760400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a0b0 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.273 18:52:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:42.273 [2024-11-28 18:52:11.762178] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.212 18:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.479 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.479 "name": "raid_bdev1", 00:11:43.479 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:43.479 "strip_size_kb": 0, 00:11:43.479 "state": "online", 00:11:43.479 "raid_level": "raid1", 00:11:43.479 "superblock": false, 00:11:43.479 "num_base_bdevs": 2, 00:11:43.479 "num_base_bdevs_discovered": 2, 00:11:43.479 "num_base_bdevs_operational": 2, 00:11:43.479 "process": { 00:11:43.479 "type": "rebuild", 00:11:43.479 "target": "spare", 00:11:43.479 "progress": { 00:11:43.479 "blocks": 20480, 00:11:43.479 "percent": 31 00:11:43.479 } 00:11:43.479 }, 00:11:43.479 "base_bdevs_list": [ 00:11:43.479 { 00:11:43.479 "name": "spare", 00:11:43.479 "uuid": "3f199c04-6bce-5b34-bd63-a27033e019a0", 00:11:43.479 "is_configured": true, 00:11:43.479 "data_offset": 0, 00:11:43.479 "data_size": 65536 00:11:43.479 }, 00:11:43.479 { 00:11:43.479 "name": "BaseBdev2", 00:11:43.480 "uuid": 
"70d3fb1f-1321-5347-acba-61db8005541e", 00:11:43.480 "is_configured": true, 00:11:43.480 "data_offset": 0, 00:11:43.480 "data_size": 65536 00:11:43.480 } 00:11:43.480 ] 00:11:43.480 }' 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=285 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.480 "name": "raid_bdev1", 00:11:43.480 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:43.480 "strip_size_kb": 0, 00:11:43.480 "state": "online", 00:11:43.480 "raid_level": "raid1", 00:11:43.480 "superblock": false, 00:11:43.480 "num_base_bdevs": 2, 00:11:43.480 "num_base_bdevs_discovered": 2, 00:11:43.480 "num_base_bdevs_operational": 2, 00:11:43.480 "process": { 00:11:43.480 "type": "rebuild", 00:11:43.480 "target": "spare", 00:11:43.480 "progress": { 00:11:43.480 "blocks": 22528, 00:11:43.480 "percent": 34 00:11:43.480 } 00:11:43.480 }, 00:11:43.480 "base_bdevs_list": [ 00:11:43.480 { 00:11:43.480 "name": "spare", 00:11:43.480 "uuid": "3f199c04-6bce-5b34-bd63-a27033e019a0", 00:11:43.480 "is_configured": true, 00:11:43.480 "data_offset": 0, 00:11:43.480 "data_size": 65536 00:11:43.480 }, 00:11:43.480 { 00:11:43.480 "name": "BaseBdev2", 00:11:43.480 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:43.480 "is_configured": true, 00:11:43.480 "data_offset": 0, 00:11:43.480 "data_size": 65536 00:11:43.480 } 00:11:43.480 ] 00:11:43.480 }' 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.480 18:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.480 18:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.480 18:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.480 18:52:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.875 "name": "raid_bdev1", 00:11:44.875 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:44.875 "strip_size_kb": 0, 00:11:44.875 "state": "online", 00:11:44.875 "raid_level": "raid1", 00:11:44.875 "superblock": false, 00:11:44.875 "num_base_bdevs": 2, 00:11:44.875 "num_base_bdevs_discovered": 2, 00:11:44.875 "num_base_bdevs_operational": 2, 00:11:44.875 "process": { 00:11:44.875 "type": "rebuild", 00:11:44.875 "target": "spare", 00:11:44.875 "progress": { 00:11:44.875 "blocks": 45056, 00:11:44.875 "percent": 68 00:11:44.875 } 00:11:44.875 }, 00:11:44.875 "base_bdevs_list": [ 00:11:44.875 { 00:11:44.875 "name": "spare", 00:11:44.875 "uuid": 
"3f199c04-6bce-5b34-bd63-a27033e019a0", 00:11:44.875 "is_configured": true, 00:11:44.875 "data_offset": 0, 00:11:44.875 "data_size": 65536 00:11:44.875 }, 00:11:44.875 { 00:11:44.875 "name": "BaseBdev2", 00:11:44.875 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:44.875 "is_configured": true, 00:11:44.875 "data_offset": 0, 00:11:44.875 "data_size": 65536 00:11:44.875 } 00:11:44.875 ] 00:11:44.875 }' 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:44.875 18:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:45.445 [2024-11-28 18:52:14.978806] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:45.445 [2024-11-28 18:52:14.978925] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:45.445 [2024-11-28 18:52:14.978974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.703 18:52:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.703 "name": "raid_bdev1", 00:11:45.703 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:45.703 "strip_size_kb": 0, 00:11:45.703 "state": "online", 00:11:45.703 "raid_level": "raid1", 00:11:45.703 "superblock": false, 00:11:45.703 "num_base_bdevs": 2, 00:11:45.703 "num_base_bdevs_discovered": 2, 00:11:45.703 "num_base_bdevs_operational": 2, 00:11:45.703 "base_bdevs_list": [ 00:11:45.703 { 00:11:45.703 "name": "spare", 00:11:45.703 "uuid": "3f199c04-6bce-5b34-bd63-a27033e019a0", 00:11:45.703 "is_configured": true, 00:11:45.703 "data_offset": 0, 00:11:45.703 "data_size": 65536 00:11:45.703 }, 00:11:45.703 { 00:11:45.703 "name": "BaseBdev2", 00:11:45.703 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:45.703 "is_configured": true, 00:11:45.703 "data_offset": 0, 00:11:45.703 "data_size": 65536 00:11:45.703 } 00:11:45.703 ] 00:11:45.703 }' 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.703 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.961 "name": "raid_bdev1", 00:11:45.961 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:45.961 "strip_size_kb": 0, 00:11:45.961 "state": "online", 00:11:45.961 "raid_level": "raid1", 00:11:45.961 "superblock": false, 00:11:45.961 "num_base_bdevs": 2, 00:11:45.961 "num_base_bdevs_discovered": 2, 00:11:45.961 "num_base_bdevs_operational": 2, 00:11:45.961 "base_bdevs_list": [ 00:11:45.961 { 00:11:45.961 "name": "spare", 00:11:45.961 "uuid": "3f199c04-6bce-5b34-bd63-a27033e019a0", 00:11:45.961 "is_configured": true, 00:11:45.961 "data_offset": 0, 00:11:45.961 "data_size": 65536 00:11:45.961 }, 00:11:45.961 { 00:11:45.961 "name": "BaseBdev2", 00:11:45.961 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:45.961 "is_configured": true, 00:11:45.961 "data_offset": 0, 00:11:45.961 "data_size": 65536 
00:11:45.961 } 00:11:45.961 ] 00:11:45.961 }' 00:11:45.961 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.962 
18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.962 "name": "raid_bdev1", 00:11:45.962 "uuid": "46735883-b114-4b56-a3cc-b72648f57dd0", 00:11:45.962 "strip_size_kb": 0, 00:11:45.962 "state": "online", 00:11:45.962 "raid_level": "raid1", 00:11:45.962 "superblock": false, 00:11:45.962 "num_base_bdevs": 2, 00:11:45.962 "num_base_bdevs_discovered": 2, 00:11:45.962 "num_base_bdevs_operational": 2, 00:11:45.962 "base_bdevs_list": [ 00:11:45.962 { 00:11:45.962 "name": "spare", 00:11:45.962 "uuid": "3f199c04-6bce-5b34-bd63-a27033e019a0", 00:11:45.962 "is_configured": true, 00:11:45.962 "data_offset": 0, 00:11:45.962 "data_size": 65536 00:11:45.962 }, 00:11:45.962 { 00:11:45.962 "name": "BaseBdev2", 00:11:45.962 "uuid": "70d3fb1f-1321-5347-acba-61db8005541e", 00:11:45.962 "is_configured": true, 00:11:45.962 "data_offset": 0, 00:11:45.962 "data_size": 65536 00:11:45.962 } 00:11:45.962 ] 00:11:45.962 }' 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.962 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.531 [2024-11-28 18:52:15.887638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.531 [2024-11-28 18:52:15.887714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.531 [2024-11-28 18:52:15.887810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.531 [2024-11-28 18:52:15.887902] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.531 [2024-11-28 18:52:15.887947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:46.531 18:52:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:46.531 /dev/nbd0 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.791 1+0 records in 00:11:46.791 1+0 records out 00:11:46.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004489 s, 9.1 MB/s 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:46.791 /dev/nbd1 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:46.791 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.052 1+0 records in 00:11:47.052 1+0 records out 00:11:47.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045355 s, 9.0 MB/s 00:11:47.052 18:52:16 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.052 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:47.313 
18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87500 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 87500 ']' 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 87500 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.313 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87500 00:11:47.573 killing process with pid 87500 00:11:47.573 Received shutdown signal, test time was about 60.000000 seconds 00:11:47.573 00:11:47.573 Latency(us) 00:11:47.573 [2024-11-28T18:52:17.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.573 [2024-11-28T18:52:17.179Z] =================================================================================================================== 00:11:47.573 [2024-11-28T18:52:17.179Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:47.573 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.573 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.573 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87500' 00:11:47.573 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 87500 00:11:47.573 [2024-11-28 18:52:16.937618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.573 18:52:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 87500 00:11:47.573 [2024-11-28 18:52:16.968084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.573 18:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:47.573 00:11:47.573 real 0m14.379s 00:11:47.573 user 0m15.610s 00:11:47.573 sys 0m3.157s 00:11:47.573 ************************************ 00:11:47.573 END TEST raid_rebuild_test 00:11:47.573 ************************************ 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.833 18:52:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 18:52:17 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:47.833 18:52:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:47.833 18:52:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.833 18:52:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 ************************************ 00:11:47.833 START TEST raid_rebuild_test_sb 00:11:47.833 ************************************ 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:47.833 18:52:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=87907 00:11:47.833 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:47.834 18:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 87907 00:11:47.834 18:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 87907 ']' 00:11:47.834 18:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.834 18:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.834 
18:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.834 18:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.834 18:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.834 [2024-11-28 18:52:17.357126] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:47.834 [2024-11-28 18:52:17.357336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:47.834 Zero copy mechanism will not be used. 00:11:47.834 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87907 ] 00:11:48.093 [2024-11-28 18:52:17.496176] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:48.093 [2024-11-28 18:52:17.534699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.093 [2024-11-28 18:52:17.560397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.093 [2024-11-28 18:52:17.602682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.093 [2024-11-28 18:52:17.602786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.700 BaseBdev1_malloc 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.700 [2024-11-28 18:52:18.179263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:48.700 [2024-11-28 18:52:18.179330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.700 [2024-11-28 18:52:18.179355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:48.700 [2024-11-28 
18:52:18.179370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.700 [2024-11-28 18:52:18.181576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.700 [2024-11-28 18:52:18.181616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:48.700 BaseBdev1 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.700 BaseBdev2_malloc 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.700 [2024-11-28 18:52:18.207897] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:48.700 [2024-11-28 18:52:18.207948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.700 [2024-11-28 18:52:18.207966] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:48.700 [2024-11-28 18:52:18.207976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.700 [2024-11-28 18:52:18.210113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:48.700 [2024-11-28 18:52:18.210151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:48.700 BaseBdev2 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.700 spare_malloc 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.700 spare_delay 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.700 [2024-11-28 18:52:18.248557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:48.700 [2024-11-28 18:52:18.248611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.700 [2024-11-28 18:52:18.248629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:48.700 [2024-11-28 18:52:18.248643] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.700 [2024-11-28 18:52:18.250762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.700 [2024-11-28 18:52:18.250803] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:48.700 spare 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.700 [2024-11-28 18:52:18.260632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.700 [2024-11-28 18:52:18.262474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.700 [2024-11-28 18:52:18.262621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:48.700 [2024-11-28 18:52:18.262637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.700 [2024-11-28 18:52:18.262896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:48.700 [2024-11-28 18:52:18.263053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:48.700 [2024-11-28 18:52:18.263063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:48.700 [2024-11-28 18:52:18.263182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.700 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.960 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.960 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.960 "name": "raid_bdev1", 00:11:48.960 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:48.960 "strip_size_kb": 0, 00:11:48.960 "state": "online", 00:11:48.960 "raid_level": "raid1", 00:11:48.960 "superblock": true, 00:11:48.960 "num_base_bdevs": 2, 00:11:48.960 
"num_base_bdevs_discovered": 2, 00:11:48.960 "num_base_bdevs_operational": 2, 00:11:48.960 "base_bdevs_list": [ 00:11:48.960 { 00:11:48.960 "name": "BaseBdev1", 00:11:48.960 "uuid": "31774a9c-6007-5b85-9a5c-4e3652baf463", 00:11:48.960 "is_configured": true, 00:11:48.960 "data_offset": 2048, 00:11:48.960 "data_size": 63488 00:11:48.960 }, 00:11:48.960 { 00:11:48.960 "name": "BaseBdev2", 00:11:48.960 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:48.960 "is_configured": true, 00:11:48.960 "data_offset": 2048, 00:11:48.960 "data_size": 63488 00:11:48.960 } 00:11:48.960 ] 00:11:48.960 }' 00:11:48.960 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.960 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.219 [2024-11-28 18:52:18.737065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:49.219 18:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:49.480 [2024-11-28 18:52:18.964862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:49.480 /dev/nbd0 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.480 1+0 records in 00:11:49.480 1+0 records out 00:11:49.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245363 s, 16.7 MB/s 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:49.480 18:52:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:49.480 18:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:53.679 63488+0 records in 00:11:53.679 63488+0 records out 00:11:53.679 32505856 bytes (33 MB, 31 MiB) copied, 4.10726 s, 7.9 MB/s 00:11:53.679 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:53.679 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.679 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:53.679 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:53.679 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:53.679 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.679 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:53.939 [2024-11-28 18:52:23.359649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.939 [2024-11-28 18:52:23.391786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.939 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.940 18:52:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.940 "name": "raid_bdev1", 00:11:53.940 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:53.940 "strip_size_kb": 0, 00:11:53.940 "state": "online", 00:11:53.940 "raid_level": "raid1", 00:11:53.940 "superblock": true, 00:11:53.940 "num_base_bdevs": 2, 00:11:53.940 "num_base_bdevs_discovered": 1, 00:11:53.940 "num_base_bdevs_operational": 1, 00:11:53.940 "base_bdevs_list": [ 00:11:53.940 { 00:11:53.940 "name": null, 00:11:53.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.940 "is_configured": false, 00:11:53.940 "data_offset": 0, 00:11:53.940 "data_size": 63488 00:11:53.940 }, 00:11:53.940 { 00:11:53.940 "name": "BaseBdev2", 00:11:53.940 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:53.940 "is_configured": true, 00:11:53.940 "data_offset": 2048, 00:11:53.940 "data_size": 63488 00:11:53.940 } 00:11:53.940 ] 00:11:53.940 }' 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.940 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.199 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:54.199 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.199 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.459 [2024-11-28 18:52:23.803860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:11:54.459 [2024-11-28 18:52:23.809086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3770 00:11:54.459 18:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.459 18:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:54.459 [2024-11-28 18:52:23.810987] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.397 "name": "raid_bdev1", 00:11:55.397 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:55.397 "strip_size_kb": 0, 00:11:55.397 "state": "online", 00:11:55.397 "raid_level": "raid1", 00:11:55.397 "superblock": true, 00:11:55.397 "num_base_bdevs": 2, 00:11:55.397 
"num_base_bdevs_discovered": 2, 00:11:55.397 "num_base_bdevs_operational": 2, 00:11:55.397 "process": { 00:11:55.397 "type": "rebuild", 00:11:55.397 "target": "spare", 00:11:55.397 "progress": { 00:11:55.397 "blocks": 20480, 00:11:55.397 "percent": 32 00:11:55.397 } 00:11:55.397 }, 00:11:55.397 "base_bdevs_list": [ 00:11:55.397 { 00:11:55.397 "name": "spare", 00:11:55.397 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:11:55.397 "is_configured": true, 00:11:55.397 "data_offset": 2048, 00:11:55.397 "data_size": 63488 00:11:55.397 }, 00:11:55.397 { 00:11:55.397 "name": "BaseBdev2", 00:11:55.397 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:55.397 "is_configured": true, 00:11:55.397 "data_offset": 2048, 00:11:55.397 "data_size": 63488 00:11:55.397 } 00:11:55.397 ] 00:11:55.397 }' 00:11:55.397 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.398 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.398 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.398 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.398 18:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:55.398 18:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.398 18:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.398 [2024-11-28 18:52:24.918318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:55.656 [2024-11-28 18:52:25.017779] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:55.656 [2024-11-28 18:52:25.017883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.656 [2024-11-28 18:52:25.017916] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:55.656 [2024-11-28 18:52:25.017938] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.656 18:52:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.656 "name": "raid_bdev1", 00:11:55.656 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:55.656 "strip_size_kb": 0, 00:11:55.656 "state": "online", 00:11:55.656 "raid_level": "raid1", 00:11:55.656 "superblock": true, 00:11:55.656 "num_base_bdevs": 2, 00:11:55.656 "num_base_bdevs_discovered": 1, 00:11:55.656 "num_base_bdevs_operational": 1, 00:11:55.656 "base_bdevs_list": [ 00:11:55.656 { 00:11:55.656 "name": null, 00:11:55.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.656 "is_configured": false, 00:11:55.656 "data_offset": 0, 00:11:55.656 "data_size": 63488 00:11:55.656 }, 00:11:55.656 { 00:11:55.656 "name": "BaseBdev2", 00:11:55.656 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:55.656 "is_configured": true, 00:11:55.656 "data_offset": 2048, 00:11:55.656 "data_size": 63488 00:11:55.656 } 00:11:55.656 ] 00:11:55.656 }' 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.656 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.915 "name": "raid_bdev1", 00:11:55.915 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:55.915 "strip_size_kb": 0, 00:11:55.915 "state": "online", 00:11:55.915 "raid_level": "raid1", 00:11:55.915 "superblock": true, 00:11:55.915 "num_base_bdevs": 2, 00:11:55.915 "num_base_bdevs_discovered": 1, 00:11:55.915 "num_base_bdevs_operational": 1, 00:11:55.915 "base_bdevs_list": [ 00:11:55.915 { 00:11:55.915 "name": null, 00:11:55.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.915 "is_configured": false, 00:11:55.915 "data_offset": 0, 00:11:55.915 "data_size": 63488 00:11:55.915 }, 00:11:55.915 { 00:11:55.915 "name": "BaseBdev2", 00:11:55.915 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:55.915 "is_configured": true, 00:11:55.915 "data_offset": 2048, 00:11:55.915 "data_size": 63488 00:11:55.915 } 00:11:55.915 ] 00:11:55.915 }' 00:11:55.915 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.174 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:56.174 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.174 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:56.174 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:56.174 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.174 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:56.174 [2024-11-28 18:52:25.606982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:56.174 [2024-11-28 18:52:25.611861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3840 00:11:56.174 18:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.174 18:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:56.174 [2024-11-28 18:52:25.613735] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.114 "name": "raid_bdev1", 00:11:57.114 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:57.114 "strip_size_kb": 0, 00:11:57.114 "state": "online", 00:11:57.114 "raid_level": "raid1", 
00:11:57.114 "superblock": true, 00:11:57.114 "num_base_bdevs": 2, 00:11:57.114 "num_base_bdevs_discovered": 2, 00:11:57.114 "num_base_bdevs_operational": 2, 00:11:57.114 "process": { 00:11:57.114 "type": "rebuild", 00:11:57.114 "target": "spare", 00:11:57.114 "progress": { 00:11:57.114 "blocks": 20480, 00:11:57.114 "percent": 32 00:11:57.114 } 00:11:57.114 }, 00:11:57.114 "base_bdevs_list": [ 00:11:57.114 { 00:11:57.114 "name": "spare", 00:11:57.114 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:11:57.114 "is_configured": true, 00:11:57.114 "data_offset": 2048, 00:11:57.114 "data_size": 63488 00:11:57.114 }, 00:11:57.114 { 00:11:57.114 "name": "BaseBdev2", 00:11:57.114 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:57.114 "is_configured": true, 00:11:57.114 "data_offset": 2048, 00:11:57.114 "data_size": 63488 00:11:57.114 } 00:11:57.114 ] 00:11:57.114 }' 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:57.114 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:57.374 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:57.374 18:52:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=299 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.374 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.374 "name": "raid_bdev1", 00:11:57.374 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:57.374 "strip_size_kb": 0, 00:11:57.374 "state": "online", 00:11:57.374 "raid_level": "raid1", 00:11:57.374 "superblock": true, 00:11:57.374 "num_base_bdevs": 2, 00:11:57.374 "num_base_bdevs_discovered": 2, 00:11:57.374 "num_base_bdevs_operational": 2, 00:11:57.374 "process": { 00:11:57.374 "type": "rebuild", 00:11:57.374 "target": "spare", 00:11:57.374 "progress": { 00:11:57.374 "blocks": 22528, 00:11:57.374 "percent": 35 00:11:57.374 } 00:11:57.374 }, 00:11:57.374 "base_bdevs_list": [ 
00:11:57.374 { 00:11:57.374 "name": "spare", 00:11:57.375 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:11:57.375 "is_configured": true, 00:11:57.375 "data_offset": 2048, 00:11:57.375 "data_size": 63488 00:11:57.375 }, 00:11:57.375 { 00:11:57.375 "name": "BaseBdev2", 00:11:57.375 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:57.375 "is_configured": true, 00:11:57.375 "data_offset": 2048, 00:11:57.375 "data_size": 63488 00:11:57.375 } 00:11:57.375 ] 00:11:57.375 }' 00:11:57.375 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.375 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:57.375 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.375 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.375 18:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:58.316 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:58.316 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.316 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.316 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.316 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.316 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.576 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.576 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.576 18:52:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.576 18:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.576 18:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.576 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.576 "name": "raid_bdev1", 00:11:58.576 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:58.576 "strip_size_kb": 0, 00:11:58.576 "state": "online", 00:11:58.576 "raid_level": "raid1", 00:11:58.576 "superblock": true, 00:11:58.576 "num_base_bdevs": 2, 00:11:58.576 "num_base_bdevs_discovered": 2, 00:11:58.576 "num_base_bdevs_operational": 2, 00:11:58.576 "process": { 00:11:58.576 "type": "rebuild", 00:11:58.576 "target": "spare", 00:11:58.576 "progress": { 00:11:58.576 "blocks": 47104, 00:11:58.576 "percent": 74 00:11:58.576 } 00:11:58.576 }, 00:11:58.576 "base_bdevs_list": [ 00:11:58.576 { 00:11:58.576 "name": "spare", 00:11:58.576 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:11:58.576 "is_configured": true, 00:11:58.576 "data_offset": 2048, 00:11:58.576 "data_size": 63488 00:11:58.576 }, 00:11:58.576 { 00:11:58.576 "name": "BaseBdev2", 00:11:58.576 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:58.576 "is_configured": true, 00:11:58.576 "data_offset": 2048, 00:11:58.576 "data_size": 63488 00:11:58.576 } 00:11:58.576 ] 00:11:58.576 }' 00:11:58.576 18:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.576 18:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.576 18:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.576 18:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.576 18:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:59.145 [2024-11-28 
18:52:28.729643] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:59.145 [2024-11-28 18:52:28.729716] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:59.145 [2024-11-28 18:52:28.729806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.714 "name": "raid_bdev1", 00:11:59.714 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:59.714 "strip_size_kb": 0, 00:11:59.714 "state": "online", 00:11:59.714 "raid_level": "raid1", 00:11:59.714 "superblock": true, 00:11:59.714 "num_base_bdevs": 2, 00:11:59.714 "num_base_bdevs_discovered": 2, 00:11:59.714 
"num_base_bdevs_operational": 2, 00:11:59.714 "base_bdevs_list": [ 00:11:59.714 { 00:11:59.714 "name": "spare", 00:11:59.714 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:11:59.714 "is_configured": true, 00:11:59.714 "data_offset": 2048, 00:11:59.714 "data_size": 63488 00:11:59.714 }, 00:11:59.714 { 00:11:59.714 "name": "BaseBdev2", 00:11:59.714 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:59.714 "is_configured": true, 00:11:59.714 "data_offset": 2048, 00:11:59.714 "data_size": 63488 00:11:59.714 } 00:11:59.714 ] 00:11:59.714 }' 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.714 18:52:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.714 "name": "raid_bdev1", 00:11:59.714 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:59.714 "strip_size_kb": 0, 00:11:59.714 "state": "online", 00:11:59.714 "raid_level": "raid1", 00:11:59.714 "superblock": true, 00:11:59.714 "num_base_bdevs": 2, 00:11:59.714 "num_base_bdevs_discovered": 2, 00:11:59.714 "num_base_bdevs_operational": 2, 00:11:59.714 "base_bdevs_list": [ 00:11:59.714 { 00:11:59.714 "name": "spare", 00:11:59.714 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:11:59.714 "is_configured": true, 00:11:59.714 "data_offset": 2048, 00:11:59.714 "data_size": 63488 00:11:59.714 }, 00:11:59.714 { 00:11:59.714 "name": "BaseBdev2", 00:11:59.714 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:59.714 "is_configured": true, 00:11:59.714 "data_offset": 2048, 00:11:59.714 "data_size": 63488 00:11:59.714 } 00:11:59.714 ] 00:11:59.714 }' 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:59.714 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.974 
18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.974 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.974 "name": "raid_bdev1", 00:11:59.974 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:11:59.974 "strip_size_kb": 0, 00:11:59.974 "state": "online", 00:11:59.974 "raid_level": "raid1", 00:11:59.974 "superblock": true, 00:11:59.974 "num_base_bdevs": 2, 00:11:59.974 "num_base_bdevs_discovered": 2, 00:11:59.974 "num_base_bdevs_operational": 2, 00:11:59.974 "base_bdevs_list": [ 00:11:59.974 { 00:11:59.974 "name": "spare", 00:11:59.974 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:11:59.974 "is_configured": true, 00:11:59.974 "data_offset": 2048, 00:11:59.974 "data_size": 63488 00:11:59.974 }, 
00:11:59.974 { 00:11:59.974 "name": "BaseBdev2", 00:11:59.974 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:11:59.974 "is_configured": true, 00:11:59.974 "data_offset": 2048, 00:11:59.974 "data_size": 63488 00:11:59.974 } 00:11:59.974 ] 00:11:59.974 }' 00:11:59.975 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.975 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.234 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.234 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.234 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.234 [2024-11-28 18:52:29.782530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.234 [2024-11-28 18:52:29.782602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.234 [2024-11-28 18:52:29.782721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.234 [2024-11-28 18:52:29.782829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.234 [2024-11-28 18:52:29.782887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:00.234 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.234 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.234 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:00.234 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.234 18:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.234 18:52:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:00.494 18:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:00.494 /dev/nbd0 00:12:00.494 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:00.494 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:00.494 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:00.494 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:12:00.494 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:00.494 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:00.494 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:00.494 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:00.494 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.755 1+0 records in 00:12:00.755 1+0 records out 00:12:00.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631037 s, 6.5 MB/s 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:00.755 /dev/nbd1 00:12:00.755 18:52:30 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.755 1+0 records in 00:12:00.755 1+0 records out 00:12:00.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370655 s, 11.1 MB/s 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:00.755 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:01.015 18:52:30 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.015 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.275 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 [2024-11-28 18:52:30.869651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:12:01.275 [2024-11-28 18:52:30.869699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.275 [2024-11-28 18:52:30.869723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:01.275 [2024-11-28 18:52:30.869732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.275 [2024-11-28 18:52:30.871876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.275 [2024-11-28 18:52:30.871913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:01.276 [2024-11-28 18:52:30.871995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:01.276 [2024-11-28 18:52:30.872038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.276 [2024-11-28 18:52:30.872151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.276 spare 00:12:01.276 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.276 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:01.276 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.276 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.536 [2024-11-28 18:52:30.972215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:01.536 [2024-11-28 18:52:30.972244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.536 [2024-11-28 18:52:30.972550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:12:01.536 [2024-11-28 18:52:30.972705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:01.536 [2024-11-28 18:52:30.972716] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:01.536 [2024-11-28 18:52:30.972847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.536 18:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.536 
18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.536 "name": "raid_bdev1", 00:12:01.536 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:01.536 "strip_size_kb": 0, 00:12:01.536 "state": "online", 00:12:01.536 "raid_level": "raid1", 00:12:01.536 "superblock": true, 00:12:01.536 "num_base_bdevs": 2, 00:12:01.536 "num_base_bdevs_discovered": 2, 00:12:01.536 "num_base_bdevs_operational": 2, 00:12:01.536 "base_bdevs_list": [ 00:12:01.536 { 00:12:01.536 "name": "spare", 00:12:01.536 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:12:01.536 "is_configured": true, 00:12:01.536 "data_offset": 2048, 00:12:01.536 "data_size": 63488 00:12:01.536 }, 00:12:01.536 { 00:12:01.536 "name": "BaseBdev2", 00:12:01.536 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:01.536 "is_configured": true, 00:12:01.536 "data_offset": 2048, 00:12:01.536 "data_size": 63488 00:12:01.536 } 00:12:01.536 ] 00:12:01.536 }' 00:12:01.536 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.536 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:01.795 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.795 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:01.795 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:01.795 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.795 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.795 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.795 18:52:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.795 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.055 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.055 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.055 "name": "raid_bdev1", 00:12:02.055 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:02.055 "strip_size_kb": 0, 00:12:02.055 "state": "online", 00:12:02.055 "raid_level": "raid1", 00:12:02.055 "superblock": true, 00:12:02.055 "num_base_bdevs": 2, 00:12:02.055 "num_base_bdevs_discovered": 2, 00:12:02.055 "num_base_bdevs_operational": 2, 00:12:02.055 "base_bdevs_list": [ 00:12:02.055 { 00:12:02.055 "name": "spare", 00:12:02.055 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:12:02.055 "is_configured": true, 00:12:02.055 "data_offset": 2048, 00:12:02.055 "data_size": 63488 00:12:02.055 }, 00:12:02.055 { 00:12:02.055 "name": "BaseBdev2", 00:12:02.055 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:02.055 "is_configured": true, 00:12:02.055 "data_offset": 2048, 00:12:02.055 "data_size": 63488 00:12:02.055 } 00:12:02.055 ] 00:12:02.055 }' 00:12:02.055 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.055 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.055 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.055 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.055 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:02.055 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.056 [2024-11-28 18:52:31.557839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.056 "name": "raid_bdev1", 00:12:02.056 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:02.056 "strip_size_kb": 0, 00:12:02.056 "state": "online", 00:12:02.056 "raid_level": "raid1", 00:12:02.056 "superblock": true, 00:12:02.056 "num_base_bdevs": 2, 00:12:02.056 "num_base_bdevs_discovered": 1, 00:12:02.056 "num_base_bdevs_operational": 1, 00:12:02.056 "base_bdevs_list": [ 00:12:02.056 { 00:12:02.056 "name": null, 00:12:02.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.056 "is_configured": false, 00:12:02.056 "data_offset": 0, 00:12:02.056 "data_size": 63488 00:12:02.056 }, 00:12:02.056 { 00:12:02.056 "name": "BaseBdev2", 00:12:02.056 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:02.056 "is_configured": true, 00:12:02.056 "data_offset": 2048, 00:12:02.056 "data_size": 63488 00:12:02.056 } 00:12:02.056 ] 00:12:02.056 }' 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.056 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.624 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:02.624 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.624 18:52:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.624 [2024-11-28 18:52:31.965980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:02.624 [2024-11-28 18:52:31.966194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:02.625 [2024-11-28 18:52:31.966217] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:02.625 [2024-11-28 18:52:31.966250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:02.625 [2024-11-28 18:52:31.971071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1fc0 00:12:02.625 18:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.625 18:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:02.625 [2024-11-28 18:52:31.973070] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:03.562 18:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.562 18:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.562 18:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.563 18:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.563 18:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.563 18:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.563 18:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.563 18:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:03.563 18:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.563 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.563 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.563 "name": "raid_bdev1", 00:12:03.563 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:03.563 "strip_size_kb": 0, 00:12:03.563 "state": "online", 00:12:03.563 "raid_level": "raid1", 00:12:03.563 "superblock": true, 00:12:03.563 "num_base_bdevs": 2, 00:12:03.563 "num_base_bdevs_discovered": 2, 00:12:03.563 "num_base_bdevs_operational": 2, 00:12:03.563 "process": { 00:12:03.563 "type": "rebuild", 00:12:03.563 "target": "spare", 00:12:03.563 "progress": { 00:12:03.563 "blocks": 20480, 00:12:03.563 "percent": 32 00:12:03.563 } 00:12:03.563 }, 00:12:03.563 "base_bdevs_list": [ 00:12:03.563 { 00:12:03.563 "name": "spare", 00:12:03.563 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:12:03.563 "is_configured": true, 00:12:03.563 "data_offset": 2048, 00:12:03.563 "data_size": 63488 00:12:03.563 }, 00:12:03.563 { 00:12:03.563 "name": "BaseBdev2", 00:12:03.563 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:03.563 "is_configured": true, 00:12:03.563 "data_offset": 2048, 00:12:03.563 "data_size": 63488 00:12:03.563 } 00:12:03.563 ] 00:12:03.563 }' 00:12:03.563 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.563 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.563 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.563 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.563 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:03.563 18:52:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.563 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.563 [2024-11-28 18:52:33.131683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.823 [2024-11-28 18:52:33.179078] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:03.823 [2024-11-28 18:52:33.179136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.823 [2024-11-28 18:52:33.179150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.823 [2024-11-28 18:52:33.179159] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.823 "name": "raid_bdev1", 00:12:03.823 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:03.823 "strip_size_kb": 0, 00:12:03.823 "state": "online", 00:12:03.823 "raid_level": "raid1", 00:12:03.823 "superblock": true, 00:12:03.823 "num_base_bdevs": 2, 00:12:03.823 "num_base_bdevs_discovered": 1, 00:12:03.823 "num_base_bdevs_operational": 1, 00:12:03.823 "base_bdevs_list": [ 00:12:03.823 { 00:12:03.823 "name": null, 00:12:03.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.823 "is_configured": false, 00:12:03.823 "data_offset": 0, 00:12:03.823 "data_size": 63488 00:12:03.823 }, 00:12:03.823 { 00:12:03.823 "name": "BaseBdev2", 00:12:03.823 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:03.823 "is_configured": true, 00:12:03.823 "data_offset": 2048, 00:12:03.823 "data_size": 63488 00:12:03.823 } 00:12:03.823 ] 00:12:03.823 }' 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.823 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.083 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:04.083 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:04.083 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.083 [2024-11-28 18:52:33.631867] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:04.083 [2024-11-28 18:52:33.631970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.083 [2024-11-28 18:52:33.632006] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:04.083 [2024-11-28 18:52:33.632039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.083 [2024-11-28 18:52:33.632487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.083 [2024-11-28 18:52:33.632549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:04.083 [2024-11-28 18:52:33.632662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:04.083 [2024-11-28 18:52:33.632706] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:04.083 [2024-11-28 18:52:33.632749] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:04.083 [2024-11-28 18:52:33.632799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.083 [2024-11-28 18:52:33.637482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:12:04.083 spare 00:12:04.083 18:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.083 18:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:04.083 [2024-11-28 18:52:33.639363] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.465 "name": "raid_bdev1", 00:12:05.465 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:05.465 "strip_size_kb": 0, 00:12:05.465 "state": "online", 00:12:05.465 
"raid_level": "raid1", 00:12:05.465 "superblock": true, 00:12:05.465 "num_base_bdevs": 2, 00:12:05.465 "num_base_bdevs_discovered": 2, 00:12:05.465 "num_base_bdevs_operational": 2, 00:12:05.465 "process": { 00:12:05.465 "type": "rebuild", 00:12:05.465 "target": "spare", 00:12:05.465 "progress": { 00:12:05.465 "blocks": 20480, 00:12:05.465 "percent": 32 00:12:05.465 } 00:12:05.465 }, 00:12:05.465 "base_bdevs_list": [ 00:12:05.465 { 00:12:05.465 "name": "spare", 00:12:05.465 "uuid": "be78a4a5-3bf3-5c49-ad8f-e34ac5c3aba6", 00:12:05.465 "is_configured": true, 00:12:05.465 "data_offset": 2048, 00:12:05.465 "data_size": 63488 00:12:05.465 }, 00:12:05.465 { 00:12:05.465 "name": "BaseBdev2", 00:12:05.465 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:05.465 "is_configured": true, 00:12:05.465 "data_offset": 2048, 00:12:05.465 "data_size": 63488 00:12:05.465 } 00:12:05.465 ] 00:12:05.465 }' 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.465 [2024-11-28 18:52:34.774018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.465 [2024-11-28 18:52:34.845499] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:05.465 [2024-11-28 18:52:34.845551] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.465 [2024-11-28 18:52:34.845567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.465 [2024-11-28 18:52:34.845574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.465 18:52:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.465 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.465 "name": "raid_bdev1", 00:12:05.465 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:05.465 "strip_size_kb": 0, 00:12:05.465 "state": "online", 00:12:05.465 "raid_level": "raid1", 00:12:05.465 "superblock": true, 00:12:05.465 "num_base_bdevs": 2, 00:12:05.465 "num_base_bdevs_discovered": 1, 00:12:05.465 "num_base_bdevs_operational": 1, 00:12:05.465 "base_bdevs_list": [ 00:12:05.465 { 00:12:05.465 "name": null, 00:12:05.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.466 "is_configured": false, 00:12:05.466 "data_offset": 0, 00:12:05.466 "data_size": 63488 00:12:05.466 }, 00:12:05.466 { 00:12:05.466 "name": "BaseBdev2", 00:12:05.466 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:05.466 "is_configured": true, 00:12:05.466 "data_offset": 2048, 00:12:05.466 "data_size": 63488 00:12:05.466 } 00:12:05.466 ] 00:12:05.466 }' 00:12:05.466 18:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.466 18:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.724 "name": "raid_bdev1", 00:12:05.724 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:05.724 "strip_size_kb": 0, 00:12:05.724 "state": "online", 00:12:05.724 "raid_level": "raid1", 00:12:05.724 "superblock": true, 00:12:05.724 "num_base_bdevs": 2, 00:12:05.724 "num_base_bdevs_discovered": 1, 00:12:05.724 "num_base_bdevs_operational": 1, 00:12:05.724 "base_bdevs_list": [ 00:12:05.724 { 00:12:05.724 "name": null, 00:12:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.724 "is_configured": false, 00:12:05.724 "data_offset": 0, 00:12:05.724 "data_size": 63488 00:12:05.724 }, 00:12:05.724 { 00:12:05.724 "name": "BaseBdev2", 00:12:05.724 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:05.724 "is_configured": true, 00:12:05.724 "data_offset": 2048, 00:12:05.724 "data_size": 63488 00:12:05.724 } 00:12:05.724 ] 00:12:05.724 }' 00:12:05.724 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.983 [2024-11-28 18:52:35.438253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:05.983 [2024-11-28 18:52:35.438307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.983 [2024-11-28 18:52:35.438327] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:05.983 [2024-11-28 18:52:35.438336] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.983 [2024-11-28 18:52:35.438750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.983 [2024-11-28 18:52:35.438768] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.983 [2024-11-28 18:52:35.438844] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:05.983 [2024-11-28 18:52:35.438858] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:05.983 [2024-11-28 18:52:35.438867] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:05.983 [2024-11-28 18:52:35.438878] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:05.983 BaseBdev1 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.983 18:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.921 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.921 "name": "raid_bdev1", 00:12:06.921 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:06.921 
"strip_size_kb": 0, 00:12:06.921 "state": "online", 00:12:06.921 "raid_level": "raid1", 00:12:06.921 "superblock": true, 00:12:06.921 "num_base_bdevs": 2, 00:12:06.921 "num_base_bdevs_discovered": 1, 00:12:06.921 "num_base_bdevs_operational": 1, 00:12:06.921 "base_bdevs_list": [ 00:12:06.921 { 00:12:06.921 "name": null, 00:12:06.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.921 "is_configured": false, 00:12:06.921 "data_offset": 0, 00:12:06.921 "data_size": 63488 00:12:06.921 }, 00:12:06.922 { 00:12:06.922 "name": "BaseBdev2", 00:12:06.922 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:06.922 "is_configured": true, 00:12:06.922 "data_offset": 2048, 00:12:06.922 "data_size": 63488 00:12:06.922 } 00:12:06.922 ] 00:12:06.922 }' 00:12:06.922 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.922 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.491 18:52:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.491 "name": "raid_bdev1", 00:12:07.491 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:07.491 "strip_size_kb": 0, 00:12:07.491 "state": "online", 00:12:07.491 "raid_level": "raid1", 00:12:07.491 "superblock": true, 00:12:07.491 "num_base_bdevs": 2, 00:12:07.491 "num_base_bdevs_discovered": 1, 00:12:07.491 "num_base_bdevs_operational": 1, 00:12:07.491 "base_bdevs_list": [ 00:12:07.491 { 00:12:07.491 "name": null, 00:12:07.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.491 "is_configured": false, 00:12:07.491 "data_offset": 0, 00:12:07.491 "data_size": 63488 00:12:07.491 }, 00:12:07.491 { 00:12:07.491 "name": "BaseBdev2", 00:12:07.491 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:07.491 "is_configured": true, 00:12:07.491 "data_offset": 2048, 00:12:07.491 "data_size": 63488 00:12:07.491 } 00:12:07.491 ] 00:12:07.491 }' 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.491 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.492 [2024-11-28 18:52:36.974694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.492 [2024-11-28 18:52:36.974908] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:07.492 [2024-11-28 18:52:36.974927] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:07.492 request: 00:12:07.492 { 00:12:07.492 "base_bdev": "BaseBdev1", 00:12:07.492 "raid_bdev": "raid_bdev1", 00:12:07.492 "method": "bdev_raid_add_base_bdev", 00:12:07.492 "req_id": 1 00:12:07.492 } 00:12:07.492 Got JSON-RPC error response 00:12:07.492 response: 00:12:07.492 { 00:12:07.492 "code": -22, 00:12:07.492 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:07.492 } 00:12:07.492 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:07.492 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:07.492 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:07.492 18:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:07.492 18:52:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:07.492 18:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:08.430 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:08.430 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.430 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.431 18:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.431 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.690 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.690 "name": "raid_bdev1", 00:12:08.690 "uuid": 
"041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:08.690 "strip_size_kb": 0, 00:12:08.690 "state": "online", 00:12:08.690 "raid_level": "raid1", 00:12:08.690 "superblock": true, 00:12:08.690 "num_base_bdevs": 2, 00:12:08.690 "num_base_bdevs_discovered": 1, 00:12:08.690 "num_base_bdevs_operational": 1, 00:12:08.690 "base_bdevs_list": [ 00:12:08.690 { 00:12:08.690 "name": null, 00:12:08.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.690 "is_configured": false, 00:12:08.690 "data_offset": 0, 00:12:08.690 "data_size": 63488 00:12:08.690 }, 00:12:08.690 { 00:12:08.690 "name": "BaseBdev2", 00:12:08.690 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:08.690 "is_configured": true, 00:12:08.690 "data_offset": 2048, 00:12:08.690 "data_size": 63488 00:12:08.690 } 00:12:08.690 ] 00:12:08.690 }' 00:12:08.690 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.690 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.949 "name": "raid_bdev1", 00:12:08.949 "uuid": "041d2f1d-ba30-41e3-8ede-bb6fe189625b", 00:12:08.949 "strip_size_kb": 0, 00:12:08.949 "state": "online", 00:12:08.949 "raid_level": "raid1", 00:12:08.949 "superblock": true, 00:12:08.949 "num_base_bdevs": 2, 00:12:08.949 "num_base_bdevs_discovered": 1, 00:12:08.949 "num_base_bdevs_operational": 1, 00:12:08.949 "base_bdevs_list": [ 00:12:08.949 { 00:12:08.949 "name": null, 00:12:08.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.949 "is_configured": false, 00:12:08.949 "data_offset": 0, 00:12:08.949 "data_size": 63488 00:12:08.949 }, 00:12:08.949 { 00:12:08.949 "name": "BaseBdev2", 00:12:08.949 "uuid": "7c735f1d-3588-5ef9-b464-4ebb17290cf0", 00:12:08.949 "is_configured": true, 00:12:08.949 "data_offset": 2048, 00:12:08.949 "data_size": 63488 00:12:08.949 } 00:12:08.949 ] 00:12:08.949 }' 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:08.949 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 87907 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 87907 ']' 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 87907 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87907 00:12:09.209 killing process with pid 87907 00:12:09.209 Received shutdown signal, test time was about 60.000000 seconds 00:12:09.209 00:12:09.209 Latency(us) 00:12:09.209 [2024-11-28T18:52:38.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.209 [2024-11-28T18:52:38.815Z] =================================================================================================================== 00:12:09.209 [2024-11-28T18:52:38.815Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87907' 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 87907 00:12:09.209 [2024-11-28 18:52:38.605803] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.209 [2024-11-28 18:52:38.605958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.209 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 87907 00:12:09.209 [2024-11-28 18:52:38.606010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.209 [2024-11-28 18:52:38.606022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:09.209 [2024-11-28 18:52:38.637904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:09.470 ************************************ 00:12:09.470 END TEST raid_rebuild_test_sb 00:12:09.470 ************************************ 00:12:09.470 00:12:09.470 real 0m21.602s 00:12:09.470 user 0m26.307s 00:12:09.470 sys 0m3.749s 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.470 18:52:38 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:09.470 18:52:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:09.470 18:52:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.470 18:52:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.470 ************************************ 00:12:09.470 START TEST raid_rebuild_test_io 00:12:09.470 ************************************ 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:09.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88623 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88623 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 88623 ']' 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.470 18:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.470 [2024-11-28 18:52:39.006592] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:09.470 [2024-11-28 18:52:39.007166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:09.470 Zero copy mechanism will not be used. 00:12:09.470 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88623 ] 00:12:09.730 [2024-11-28 18:52:39.141383] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:09.730 [2024-11-28 18:52:39.178850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.730 [2024-11-28 18:52:39.205637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.730 [2024-11-28 18:52:39.249355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.730 [2024-11-28 18:52:39.249490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.299 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.299 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:10.299 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.300 BaseBdev1_malloc 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.300 [2024-11-28 18:52:39.858479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:10.300 [2024-11-28 18:52:39.858638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.300 [2024-11-28 18:52:39.858687] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:10.300 [2024-11-28 
18:52:39.858724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.300 [2024-11-28 18:52:39.860885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.300 [2024-11-28 18:52:39.860960] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.300 BaseBdev1 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.300 BaseBdev2_malloc 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.300 [2024-11-28 18:52:39.887032] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:10.300 [2024-11-28 18:52:39.887139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.300 [2024-11-28 18:52:39.887189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:10.300 [2024-11-28 18:52:39.887217] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.300 [2024-11-28 18:52:39.889341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:10.300 [2024-11-28 18:52:39.889425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.300 BaseBdev2 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.300 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.560 spare_malloc 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.560 spare_delay 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.560 [2024-11-28 18:52:39.927768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:10.560 [2024-11-28 18:52:39.927826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.560 [2024-11-28 18:52:39.927844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:10.560 [2024-11-28 18:52:39.927857] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.560 [2024-11-28 18:52:39.929910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.560 [2024-11-28 18:52:39.929949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:10.560 spare 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.560 [2024-11-28 18:52:39.939823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.560 [2024-11-28 18:52:39.941708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.560 [2024-11-28 18:52:39.941788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:10.560 [2024-11-28 18:52:39.941799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:10.560 [2024-11-28 18:52:39.942039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:10.560 [2024-11-28 18:52:39.942169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:10.560 [2024-11-28 18:52:39.942184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:10.560 [2024-11-28 18:52:39.942313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.560 "name": "raid_bdev1", 00:12:10.560 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:10.560 "strip_size_kb": 0, 00:12:10.560 "state": "online", 00:12:10.560 "raid_level": "raid1", 00:12:10.560 "superblock": false, 00:12:10.560 "num_base_bdevs": 2, 00:12:10.560 
"num_base_bdevs_discovered": 2, 00:12:10.560 "num_base_bdevs_operational": 2, 00:12:10.560 "base_bdevs_list": [ 00:12:10.560 { 00:12:10.560 "name": "BaseBdev1", 00:12:10.560 "uuid": "5174753f-085a-5348-a9b4-d84cc9b2e624", 00:12:10.560 "is_configured": true, 00:12:10.560 "data_offset": 0, 00:12:10.560 "data_size": 65536 00:12:10.560 }, 00:12:10.560 { 00:12:10.560 "name": "BaseBdev2", 00:12:10.560 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:10.560 "is_configured": true, 00:12:10.560 "data_offset": 0, 00:12:10.560 "data_size": 65536 00:12:10.560 } 00:12:10.560 ] 00:12:10.560 }' 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.560 18:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.820 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:10.820 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.820 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.820 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:10.820 [2024-11-28 18:52:40.392216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.820 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.080 [2024-11-28 18:52:40.475957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.080 "name": "raid_bdev1", 00:12:11.080 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:11.080 "strip_size_kb": 0, 00:12:11.080 "state": "online", 00:12:11.080 "raid_level": "raid1", 00:12:11.080 "superblock": false, 00:12:11.080 "num_base_bdevs": 2, 00:12:11.080 "num_base_bdevs_discovered": 1, 00:12:11.080 "num_base_bdevs_operational": 1, 00:12:11.080 "base_bdevs_list": [ 00:12:11.080 { 00:12:11.080 "name": null, 00:12:11.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.080 "is_configured": false, 00:12:11.080 "data_offset": 0, 00:12:11.080 "data_size": 65536 00:12:11.080 }, 00:12:11.080 { 00:12:11.080 "name": "BaseBdev2", 00:12:11.080 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:11.080 "is_configured": true, 00:12:11.080 "data_offset": 0, 00:12:11.080 "data_size": 65536 00:12:11.080 } 00:12:11.080 ] 00:12:11.080 }' 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.080 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.080 [2024-11-28 18:52:40.550873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:12:11.080 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:12:11.080 Zero copy mechanism will not be used. 00:12:11.080 Running I/O for 60 seconds... 00:12:11.340 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:11.340 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.340 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.340 [2024-11-28 18:52:40.928868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.600 18:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.600 18:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:11.600 [2024-11-28 18:52:40.970798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:11.600 [2024-11-28 18:52:40.972878] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:11.600 [2024-11-28 18:52:41.080670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:11.600 [2024-11-28 18:52:41.081114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:11.600 [2024-11-28 18:52:41.199586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:11.600 [2024-11-28 18:52:41.199920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:12.169 [2024-11-28 18:52:41.537183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:12.169 212.00 IOPS, 636.00 MiB/s [2024-11-28T18:52:41.775Z] [2024-11-28 18:52:41.746288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.429 18:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.429 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.429 "name": "raid_bdev1", 00:12:12.429 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:12.429 "strip_size_kb": 0, 00:12:12.429 "state": "online", 00:12:12.429 "raid_level": "raid1", 00:12:12.429 "superblock": false, 00:12:12.429 "num_base_bdevs": 2, 00:12:12.429 "num_base_bdevs_discovered": 2, 00:12:12.429 "num_base_bdevs_operational": 2, 00:12:12.429 "process": { 00:12:12.429 "type": "rebuild", 00:12:12.429 "target": "spare", 00:12:12.429 "progress": { 00:12:12.429 "blocks": 12288, 00:12:12.429 "percent": 18 00:12:12.429 } 00:12:12.429 }, 00:12:12.429 "base_bdevs_list": [ 00:12:12.429 { 00:12:12.429 "name": "spare", 00:12:12.429 "uuid": "be8af134-bae8-52c6-b4b5-0995e5e671ce", 00:12:12.429 
"is_configured": true, 00:12:12.429 "data_offset": 0, 00:12:12.429 "data_size": 65536 00:12:12.429 }, 00:12:12.429 { 00:12:12.429 "name": "BaseBdev2", 00:12:12.429 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:12.429 "is_configured": true, 00:12:12.429 "data_offset": 0, 00:12:12.429 "data_size": 65536 00:12:12.429 } 00:12:12.429 ] 00:12:12.429 }' 00:12:12.429 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.690 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.690 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.690 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.690 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:12.690 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.690 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.690 [2024-11-28 18:52:42.096085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.690 [2024-11-28 18:52:42.164623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:12.690 [2024-11-28 18:52:42.265736] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:12.690 [2024-11-28 18:52:42.273003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.690 [2024-11-28 18:52:42.273042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.690 [2024-11-28 18:52:42.273053] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:12.690 [2024-11-28 18:52:42.290616] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.950 "name": "raid_bdev1", 00:12:12.950 
"uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:12.950 "strip_size_kb": 0, 00:12:12.950 "state": "online", 00:12:12.950 "raid_level": "raid1", 00:12:12.950 "superblock": false, 00:12:12.950 "num_base_bdevs": 2, 00:12:12.950 "num_base_bdevs_discovered": 1, 00:12:12.950 "num_base_bdevs_operational": 1, 00:12:12.950 "base_bdevs_list": [ 00:12:12.950 { 00:12:12.950 "name": null, 00:12:12.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.950 "is_configured": false, 00:12:12.950 "data_offset": 0, 00:12:12.950 "data_size": 65536 00:12:12.950 }, 00:12:12.950 { 00:12:12.950 "name": "BaseBdev2", 00:12:12.950 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:12.950 "is_configured": true, 00:12:12.950 "data_offset": 0, 00:12:12.950 "data_size": 65536 00:12:12.950 } 00:12:12.950 ] 00:12:12.950 }' 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.950 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.210 176.00 IOPS, 528.00 MiB/s [2024-11-28T18:52:42.816Z] 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.210 18:52:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.210 "name": "raid_bdev1", 00:12:13.210 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:13.210 "strip_size_kb": 0, 00:12:13.210 "state": "online", 00:12:13.210 "raid_level": "raid1", 00:12:13.210 "superblock": false, 00:12:13.210 "num_base_bdevs": 2, 00:12:13.210 "num_base_bdevs_discovered": 1, 00:12:13.210 "num_base_bdevs_operational": 1, 00:12:13.210 "base_bdevs_list": [ 00:12:13.210 { 00:12:13.210 "name": null, 00:12:13.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.210 "is_configured": false, 00:12:13.210 "data_offset": 0, 00:12:13.210 "data_size": 65536 00:12:13.210 }, 00:12:13.210 { 00:12:13.210 "name": "BaseBdev2", 00:12:13.210 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:13.210 "is_configured": true, 00:12:13.210 "data_offset": 0, 00:12:13.210 "data_size": 65536 00:12:13.210 } 00:12:13.210 ] 00:12:13.210 }' 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.210 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.470 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.470 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:13.470 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.470 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.470 [2024-11-28 18:52:42.873707] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:13.470 18:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.470 18:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:13.470 [2024-11-28 18:52:42.919481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:12:13.470 [2024-11-28 18:52:42.921406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:13.470 [2024-11-28 18:52:43.043154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:13.470 [2024-11-28 18:52:43.043590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:13.729 [2024-11-28 18:52:43.258098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:13.729 [2024-11-28 18:52:43.258338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:14.248 175.33 IOPS, 526.00 MiB/s [2024-11-28T18:52:43.854Z] [2024-11-28 18:52:43.681222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.512 18:52:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.512 "name": "raid_bdev1", 00:12:14.512 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:14.512 "strip_size_kb": 0, 00:12:14.512 "state": "online", 00:12:14.512 "raid_level": "raid1", 00:12:14.512 "superblock": false, 00:12:14.512 "num_base_bdevs": 2, 00:12:14.512 "num_base_bdevs_discovered": 2, 00:12:14.512 "num_base_bdevs_operational": 2, 00:12:14.512 "process": { 00:12:14.512 "type": "rebuild", 00:12:14.512 "target": "spare", 00:12:14.512 "progress": { 00:12:14.512 "blocks": 12288, 00:12:14.512 "percent": 18 00:12:14.512 } 00:12:14.512 }, 00:12:14.512 "base_bdevs_list": [ 00:12:14.512 { 00:12:14.512 "name": "spare", 00:12:14.512 "uuid": "be8af134-bae8-52c6-b4b5-0995e5e671ce", 00:12:14.512 "is_configured": true, 00:12:14.512 "data_offset": 0, 00:12:14.512 "data_size": 65536 00:12:14.512 }, 00:12:14.512 { 00:12:14.512 "name": "BaseBdev2", 00:12:14.512 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:14.512 "is_configured": true, 00:12:14.512 "data_offset": 0, 00:12:14.512 "data_size": 65536 00:12:14.512 } 00:12:14.512 ] 00:12:14.512 }' 00:12:14.512 18:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=317 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:14.512 "name": "raid_bdev1", 00:12:14.512 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:14.512 "strip_size_kb": 0, 00:12:14.512 "state": "online", 00:12:14.512 "raid_level": "raid1", 00:12:14.512 "superblock": false, 00:12:14.512 "num_base_bdevs": 2, 00:12:14.512 "num_base_bdevs_discovered": 2, 00:12:14.512 "num_base_bdevs_operational": 2, 00:12:14.512 "process": { 00:12:14.512 "type": "rebuild", 00:12:14.512 "target": "spare", 00:12:14.512 "progress": { 00:12:14.512 "blocks": 14336, 00:12:14.512 "percent": 21 00:12:14.512 } 00:12:14.512 }, 00:12:14.512 "base_bdevs_list": [ 00:12:14.512 { 00:12:14.512 "name": "spare", 00:12:14.512 "uuid": "be8af134-bae8-52c6-b4b5-0995e5e671ce", 00:12:14.512 "is_configured": true, 00:12:14.512 "data_offset": 0, 00:12:14.512 "data_size": 65536 00:12:14.512 }, 00:12:14.512 { 00:12:14.512 "name": "BaseBdev2", 00:12:14.512 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:14.512 "is_configured": true, 00:12:14.512 "data_offset": 0, 00:12:14.512 "data_size": 65536 00:12:14.512 } 00:12:14.512 ] 00:12:14.512 }' 00:12:14.512 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.781 [2024-11-28 18:52:44.128470] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:14.781 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.781 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.781 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.781 18:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:15.051 142.50 IOPS, 427.50 MiB/s [2024-11-28T18:52:44.657Z] [2024-11-28 18:52:44.573072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 
offset_end: 24576 00:12:15.319 [2024-11-28 18:52:44.895717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.888 [2024-11-28 18:52:45.205863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.888 "name": "raid_bdev1", 00:12:15.888 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:15.888 "strip_size_kb": 0, 00:12:15.888 "state": "online", 00:12:15.888 "raid_level": "raid1", 00:12:15.888 "superblock": false, 00:12:15.888 "num_base_bdevs": 2, 00:12:15.888 "num_base_bdevs_discovered": 2, 00:12:15.888 
"num_base_bdevs_operational": 2, 00:12:15.888 "process": { 00:12:15.888 "type": "rebuild", 00:12:15.888 "target": "spare", 00:12:15.888 "progress": { 00:12:15.888 "blocks": 32768, 00:12:15.888 "percent": 50 00:12:15.888 } 00:12:15.888 }, 00:12:15.888 "base_bdevs_list": [ 00:12:15.888 { 00:12:15.888 "name": "spare", 00:12:15.888 "uuid": "be8af134-bae8-52c6-b4b5-0995e5e671ce", 00:12:15.888 "is_configured": true, 00:12:15.888 "data_offset": 0, 00:12:15.888 "data_size": 65536 00:12:15.888 }, 00:12:15.888 { 00:12:15.888 "name": "BaseBdev2", 00:12:15.888 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:15.888 "is_configured": true, 00:12:15.888 "data_offset": 0, 00:12:15.888 "data_size": 65536 00:12:15.888 } 00:12:15.888 ] 00:12:15.888 }' 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.888 18:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:15.888 [2024-11-28 18:52:45.325894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:16.148 [2024-11-28 18:52:45.550872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:16.148 [2024-11-28 18:52:45.551267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:16.408 125.60 IOPS, 376.80 MiB/s [2024-11-28T18:52:46.014Z] [2024-11-28 18:52:45.764921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 
43008 00:12:16.668 [2024-11-28 18:52:46.105437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.929 "name": "raid_bdev1", 00:12:16.929 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:16.929 "strip_size_kb": 0, 00:12:16.929 "state": "online", 00:12:16.929 "raid_level": "raid1", 00:12:16.929 "superblock": false, 00:12:16.929 "num_base_bdevs": 2, 00:12:16.929 "num_base_bdevs_discovered": 2, 00:12:16.929 "num_base_bdevs_operational": 2, 00:12:16.929 "process": { 00:12:16.929 "type": "rebuild", 00:12:16.929 "target": "spare", 00:12:16.929 "progress": { 00:12:16.929 "blocks": 49152, 
00:12:16.929 "percent": 75 00:12:16.929 } 00:12:16.929 }, 00:12:16.929 "base_bdevs_list": [ 00:12:16.929 { 00:12:16.929 "name": "spare", 00:12:16.929 "uuid": "be8af134-bae8-52c6-b4b5-0995e5e671ce", 00:12:16.929 "is_configured": true, 00:12:16.929 "data_offset": 0, 00:12:16.929 "data_size": 65536 00:12:16.929 }, 00:12:16.929 { 00:12:16.929 "name": "BaseBdev2", 00:12:16.929 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:16.929 "is_configured": true, 00:12:16.929 "data_offset": 0, 00:12:16.929 "data_size": 65536 00:12:16.929 } 00:12:16.929 ] 00:12:16.929 }' 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.929 18:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:17.189 [2024-11-28 18:52:46.550834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:17.449 112.83 IOPS, 338.50 MiB/s [2024-11-28T18:52:47.055Z] [2024-11-28 18:52:46.871230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:17.708 [2024-11-28 18:52:47.304018] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:17.969 [2024-11-28 18:52:47.409379] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:17.969 [2024-11-28 18:52:47.411745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:17.969 
18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.969 "name": "raid_bdev1", 00:12:17.969 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:17.969 "strip_size_kb": 0, 00:12:17.969 "state": "online", 00:12:17.969 "raid_level": "raid1", 00:12:17.969 "superblock": false, 00:12:17.969 "num_base_bdevs": 2, 00:12:17.969 "num_base_bdevs_discovered": 2, 00:12:17.969 "num_base_bdevs_operational": 2, 00:12:17.969 "base_bdevs_list": [ 00:12:17.969 { 00:12:17.969 "name": "spare", 00:12:17.969 "uuid": "be8af134-bae8-52c6-b4b5-0995e5e671ce", 00:12:17.969 "is_configured": true, 00:12:17.969 "data_offset": 0, 00:12:17.969 "data_size": 65536 00:12:17.969 }, 00:12:17.969 { 00:12:17.969 "name": "BaseBdev2", 00:12:17.969 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:17.969 "is_configured": true, 00:12:17.969 "data_offset": 0, 
00:12:17.969 "data_size": 65536 00:12:17.969 } 00:12:17.969 ] 00:12:17.969 }' 00:12:17.969 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.229 100.86 IOPS, 302.57 MiB/s [2024-11-28T18:52:47.835Z] 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.229 18:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.230 "name": "raid_bdev1", 00:12:18.230 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:18.230 
"strip_size_kb": 0, 00:12:18.230 "state": "online", 00:12:18.230 "raid_level": "raid1", 00:12:18.230 "superblock": false, 00:12:18.230 "num_base_bdevs": 2, 00:12:18.230 "num_base_bdevs_discovered": 2, 00:12:18.230 "num_base_bdevs_operational": 2, 00:12:18.230 "base_bdevs_list": [ 00:12:18.230 { 00:12:18.230 "name": "spare", 00:12:18.230 "uuid": "be8af134-bae8-52c6-b4b5-0995e5e671ce", 00:12:18.230 "is_configured": true, 00:12:18.230 "data_offset": 0, 00:12:18.230 "data_size": 65536 00:12:18.230 }, 00:12:18.230 { 00:12:18.230 "name": "BaseBdev2", 00:12:18.230 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:18.230 "is_configured": true, 00:12:18.230 "data_offset": 0, 00:12:18.230 "data_size": 65536 00:12:18.230 } 00:12:18.230 ] 00:12:18.230 }' 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.230 "name": "raid_bdev1", 00:12:18.230 "uuid": "b4657608-0441-4ef1-8238-fee43f3cfe73", 00:12:18.230 "strip_size_kb": 0, 00:12:18.230 "state": "online", 00:12:18.230 "raid_level": "raid1", 00:12:18.230 "superblock": false, 00:12:18.230 "num_base_bdevs": 2, 00:12:18.230 "num_base_bdevs_discovered": 2, 00:12:18.230 "num_base_bdevs_operational": 2, 00:12:18.230 "base_bdevs_list": [ 00:12:18.230 { 00:12:18.230 "name": "spare", 00:12:18.230 "uuid": "be8af134-bae8-52c6-b4b5-0995e5e671ce", 00:12:18.230 "is_configured": true, 00:12:18.230 "data_offset": 0, 00:12:18.230 "data_size": 65536 00:12:18.230 }, 00:12:18.230 { 00:12:18.230 "name": "BaseBdev2", 00:12:18.230 "uuid": "d27e9dfd-15ee-5d64-8bff-54febdae100e", 00:12:18.230 "is_configured": true, 00:12:18.230 "data_offset": 0, 00:12:18.230 "data_size": 65536 00:12:18.230 } 00:12:18.230 ] 00:12:18.230 }' 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.230 18:52:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.799 [2024-11-28 18:52:48.174343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.799 [2024-11-28 18:52:48.174475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.799 00:12:18.799 Latency(us) 00:12:18.799 [2024-11-28T18:52:48.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.799 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:18.799 raid_bdev1 : 7.71 95.43 286.28 0.00 0.00 14857.36 276.68 112415.97 00:12:18.799 [2024-11-28T18:52:48.405Z] =================================================================================================================== 00:12:18.799 [2024-11-28T18:52:48.405Z] Total : 95.43 286.28 0.00 0.00 14857.36 276.68 112415.97 00:12:18.799 [2024-11-28 18:52:48.269359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.799 [2024-11-28 18:52:48.269459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.799 [2024-11-28 18:52:48.269572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.799 [2024-11-28 18:52:48.269624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:18.799 { 00:12:18.799 "results": [ 00:12:18.799 { 00:12:18.799 "job": "raid_bdev1", 00:12:18.799 "core_mask": "0x1", 00:12:18.799 "workload": "randrw", 00:12:18.799 "percentage": 50, 00:12:18.799 "status": "finished", 00:12:18.799 "queue_depth": 2, 
00:12:18.799 "io_size": 3145728, 00:12:18.799 "runtime": 7.712697, 00:12:18.799 "iops": 95.42706008028061, 00:12:18.799 "mibps": 286.2811802408418, 00:12:18.799 "io_failed": 0, 00:12:18.799 "io_timeout": 0, 00:12:18.799 "avg_latency_us": 14857.357500975828, 00:12:18.799 "min_latency_us": 276.6843894360673, 00:12:18.799 "max_latency_us": 112415.97489758563 00:12:18.799 } 00:12:18.799 ], 00:12:18.799 "core_count": 1 00:12:18.799 } 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.799 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:19.059 /dev/nbd0 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.059 1+0 records in 00:12:19.059 1+0 records out 00:12:19.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479756 s, 8.5 MB/s 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:19.059 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.060 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:19.320 /dev/nbd1 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.320 1+0 records in 00:12:19.320 1+0 records out 00:12:19.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520199 s, 7.9 MB/s 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@893 -- # return 0 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.320 18:52:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.580 18:52:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.580 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 88623 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 88623 ']' 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 
-- # kill -0 88623 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88623 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88623' 00:12:19.840 killing process with pid 88623 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 88623 00:12:19.840 Received shutdown signal, test time was about 8.874064 seconds 00:12:19.840 00:12:19.840 Latency(us) 00:12:19.840 [2024-11-28T18:52:49.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.840 [2024-11-28T18:52:49.446Z] =================================================================================================================== 00:12:19.840 [2024-11-28T18:52:49.446Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:19.840 [2024-11-28 18:52:49.427792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.840 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 88623 00:12:20.100 [2024-11-28 18:52:49.454088] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.100 18:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:20.100 00:12:20.100 real 0m10.739s 00:12:20.100 user 0m13.793s 00:12:20.100 sys 0m1.441s 00:12:20.100 ************************************ 00:12:20.100 END TEST raid_rebuild_test_io 00:12:20.100 ************************************ 00:12:20.100 18:52:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.100 18:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.360 18:52:49 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:20.360 18:52:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:20.360 18:52:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.360 18:52:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.360 ************************************ 00:12:20.360 START TEST raid_rebuild_test_sb_io 00:12:20.360 ************************************ 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.360 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88989 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88989 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 88989 ']' 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.361 18:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.361 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:20.361 Zero copy mechanism will not be used. 00:12:20.361 [2024-11-28 18:52:49.846932] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:20.361 [2024-11-28 18:52:49.847077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88989 ] 00:12:20.620 [2024-11-28 18:52:49.981807] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:20.621 [2024-11-28 18:52:50.022366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.621 [2024-11-28 18:52:50.048054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.621 [2024-11-28 18:52:50.090974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.621 [2024-11-28 18:52:50.091024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.191 BaseBdev1_malloc 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.191 [2024-11-28 18:52:50.703573] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:21.191 [2024-11-28 18:52:50.703641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.191 [2024-11-28 18:52:50.703664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:12:21.191 [2024-11-28 18:52:50.703686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.191 [2024-11-28 18:52:50.705849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.191 [2024-11-28 18:52:50.705889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.191 BaseBdev1 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.191 BaseBdev2_malloc 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.191 [2024-11-28 18:52:50.732295] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:21.191 [2024-11-28 18:52:50.732413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.191 [2024-11-28 18:52:50.732482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:21.191 [2024-11-28 18:52:50.732521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.191 [2024-11-28 18:52:50.734566] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.191 [2024-11-28 18:52:50.734639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.191 BaseBdev2 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.191 spare_malloc 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.191 spare_delay 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.191 [2024-11-28 18:52:50.773027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:21.191 [2024-11-28 18:52:50.773133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.191 [2024-11-28 18:52:50.773172] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:21.191 [2024-11-28 18:52:50.773184] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.191 [2024-11-28 18:52:50.775252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.191 [2024-11-28 18:52:50.775293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:21.191 spare 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.191 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.191 [2024-11-28 18:52:50.785096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.191 [2024-11-28 18:52:50.786952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.191 [2024-11-28 18:52:50.787088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:21.191 [2024-11-28 18:52:50.787108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.191 [2024-11-28 18:52:50.787335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:21.191 [2024-11-28 18:52:50.787495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:21.191 [2024-11-28 18:52:50.787510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:21.191 [2024-11-28 18:52:50.787622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.191 18:52:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.192 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.451 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.451 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.451 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.451 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.451 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.451 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.451 "name": "raid_bdev1", 00:12:21.451 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:21.451 
"strip_size_kb": 0, 00:12:21.451 "state": "online", 00:12:21.451 "raid_level": "raid1", 00:12:21.452 "superblock": true, 00:12:21.452 "num_base_bdevs": 2, 00:12:21.452 "num_base_bdevs_discovered": 2, 00:12:21.452 "num_base_bdevs_operational": 2, 00:12:21.452 "base_bdevs_list": [ 00:12:21.452 { 00:12:21.452 "name": "BaseBdev1", 00:12:21.452 "uuid": "1779d903-18d0-5947-b704-dc74cdfcb147", 00:12:21.452 "is_configured": true, 00:12:21.452 "data_offset": 2048, 00:12:21.452 "data_size": 63488 00:12:21.452 }, 00:12:21.452 { 00:12:21.452 "name": "BaseBdev2", 00:12:21.452 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:21.452 "is_configured": true, 00:12:21.452 "data_offset": 2048, 00:12:21.452 "data_size": 63488 00:12:21.452 } 00:12:21.452 ] 00:12:21.452 }' 00:12:21.452 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.452 18:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.711 [2024-11-28 18:52:51.245430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.711 18:52:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.711 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.971 [2024-11-28 18:52:51.329208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:21.971 18:52:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.971 "name": "raid_bdev1", 00:12:21.971 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:21.971 "strip_size_kb": 0, 00:12:21.971 "state": "online", 00:12:21.971 "raid_level": "raid1", 00:12:21.971 "superblock": true, 00:12:21.971 "num_base_bdevs": 2, 00:12:21.971 "num_base_bdevs_discovered": 1, 00:12:21.971 "num_base_bdevs_operational": 1, 00:12:21.971 "base_bdevs_list": [ 00:12:21.971 { 00:12:21.971 "name": null, 00:12:21.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.971 "is_configured": false, 00:12:21.971 "data_offset": 0, 00:12:21.971 "data_size": 63488 00:12:21.971 }, 00:12:21.971 { 00:12:21.971 "name": "BaseBdev2", 00:12:21.971 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:21.971 "is_configured": true, 00:12:21.971 "data_offset": 2048, 00:12:21.971 "data_size": 63488 00:12:21.971 } 00:12:21.971 ] 00:12:21.971 }' 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.971 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.971 [2024-11-28 18:52:51.435758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:12:21.971 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:21.971 Zero copy mechanism will not be used. 00:12:21.971 Running I/O for 60 seconds... 00:12:22.231 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:22.231 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.231 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.231 [2024-11-28 18:52:51.739491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.231 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.231 18:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:22.231 [2024-11-28 18:52:51.786696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:22.231 [2024-11-28 18:52:51.788695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.491 [2024-11-28 18:52:51.907476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.491 [2024-11-28 18:52:51.907966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.750 [2024-11-28 18:52:52.121110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.750 [2024-11-28 18:52:52.121379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:12:23.011 174.00 IOPS, 522.00 MiB/s [2024-11-28T18:52:52.617Z] [2024-11-28 18:52:52.461541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.271 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.271 "name": "raid_bdev1", 00:12:23.271 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:23.271 "strip_size_kb": 0, 00:12:23.271 "state": "online", 00:12:23.271 "raid_level": "raid1", 00:12:23.271 "superblock": true, 00:12:23.271 "num_base_bdevs": 2, 00:12:23.271 "num_base_bdevs_discovered": 2, 00:12:23.271 "num_base_bdevs_operational": 2, 00:12:23.271 "process": { 00:12:23.271 "type": "rebuild", 00:12:23.271 "target": "spare", 00:12:23.271 "progress": { 00:12:23.271 "blocks": 12288, 00:12:23.271 "percent": 19 
00:12:23.271 } 00:12:23.271 }, 00:12:23.271 "base_bdevs_list": [ 00:12:23.271 { 00:12:23.271 "name": "spare", 00:12:23.271 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:23.271 "is_configured": true, 00:12:23.271 "data_offset": 2048, 00:12:23.272 "data_size": 63488 00:12:23.272 }, 00:12:23.272 { 00:12:23.272 "name": "BaseBdev2", 00:12:23.272 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:23.272 "is_configured": true, 00:12:23.272 "data_offset": 2048, 00:12:23.272 "data_size": 63488 00:12:23.272 } 00:12:23.272 ] 00:12:23.272 }' 00:12:23.272 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.272 [2024-11-28 18:52:52.816816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:23.272 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.272 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.532 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.532 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:23.532 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.532 18:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.532 [2024-11-28 18:52:52.911637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.532 [2024-11-28 18:52:52.958255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:23.532 [2024-11-28 18:52:53.058340] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:23.532 [2024-11-28 18:52:53.070344] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.532 [2024-11-28 18:52:53.070382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.532 [2024-11-28 18:52:53.070403] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:23.532 [2024-11-28 18:52:53.091965] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.532 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.792 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.792 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.792 "name": "raid_bdev1", 00:12:23.792 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:23.792 "strip_size_kb": 0, 00:12:23.792 "state": "online", 00:12:23.792 "raid_level": "raid1", 00:12:23.792 "superblock": true, 00:12:23.793 "num_base_bdevs": 2, 00:12:23.793 "num_base_bdevs_discovered": 1, 00:12:23.793 "num_base_bdevs_operational": 1, 00:12:23.793 "base_bdevs_list": [ 00:12:23.793 { 00:12:23.793 "name": null, 00:12:23.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.793 "is_configured": false, 00:12:23.793 "data_offset": 0, 00:12:23.793 "data_size": 63488 00:12:23.793 }, 00:12:23.793 { 00:12:23.793 "name": "BaseBdev2", 00:12:23.793 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:23.793 "is_configured": true, 00:12:23.793 "data_offset": 2048, 00:12:23.793 "data_size": 63488 00:12:23.793 } 00:12:23.793 ] 00:12:23.793 }' 00:12:23.793 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.793 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.053 167.50 IOPS, 502.50 MiB/s [2024-11-28T18:52:53.659Z] 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.054 18:52:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.054 "name": "raid_bdev1", 00:12:24.054 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:24.054 "strip_size_kb": 0, 00:12:24.054 "state": "online", 00:12:24.054 "raid_level": "raid1", 00:12:24.054 "superblock": true, 00:12:24.054 "num_base_bdevs": 2, 00:12:24.054 "num_base_bdevs_discovered": 1, 00:12:24.054 "num_base_bdevs_operational": 1, 00:12:24.054 "base_bdevs_list": [ 00:12:24.054 { 00:12:24.054 "name": null, 00:12:24.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.054 "is_configured": false, 00:12:24.054 "data_offset": 0, 00:12:24.054 "data_size": 63488 00:12:24.054 }, 00:12:24.054 { 00:12:24.054 "name": "BaseBdev2", 00:12:24.054 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:24.054 "is_configured": true, 00:12:24.054 "data_offset": 2048, 00:12:24.054 "data_size": 63488 00:12:24.054 } 00:12:24.054 ] 00:12:24.054 }' 00:12:24.054 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.314 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.314 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.314 18:52:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.314 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.314 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.314 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.314 [2024-11-28 18:52:53.720854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.314 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.314 18:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:24.314 [2024-11-28 18:52:53.756934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:12:24.314 [2024-11-28 18:52:53.758910] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:24.314 [2024-11-28 18:52:53.870956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:24.314 [2024-11-28 18:52:53.871269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:24.574 [2024-11-28 18:52:54.085130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.574 [2024-11-28 18:52:54.085513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.834 [2024-11-28 18:52:54.327419] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:24.834 [2024-11-28 18:52:54.327940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:25.094 158.33 IOPS, 
475.00 MiB/s [2024-11-28T18:52:54.700Z] [2024-11-28 18:52:54.539969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:25.094 [2024-11-28 18:52:54.540155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.354 "name": "raid_bdev1", 00:12:25.354 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:25.354 "strip_size_kb": 0, 00:12:25.354 "state": "online", 00:12:25.354 "raid_level": "raid1", 00:12:25.354 "superblock": true, 00:12:25.354 "num_base_bdevs": 2, 00:12:25.354 "num_base_bdevs_discovered": 2, 00:12:25.354 "num_base_bdevs_operational": 2, 00:12:25.354 "process": { 00:12:25.354 
"type": "rebuild", 00:12:25.354 "target": "spare", 00:12:25.354 "progress": { 00:12:25.354 "blocks": 12288, 00:12:25.354 "percent": 19 00:12:25.354 } 00:12:25.354 }, 00:12:25.354 "base_bdevs_list": [ 00:12:25.354 { 00:12:25.354 "name": "spare", 00:12:25.354 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:25.354 "is_configured": true, 00:12:25.354 "data_offset": 2048, 00:12:25.354 "data_size": 63488 00:12:25.354 }, 00:12:25.354 { 00:12:25.354 "name": "BaseBdev2", 00:12:25.354 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:25.354 "is_configured": true, 00:12:25.354 "data_offset": 2048, 00:12:25.354 "data_size": 63488 00:12:25.354 } 00:12:25.354 ] 00:12:25.354 }' 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:25.354 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=327 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.354 "name": "raid_bdev1", 00:12:25.354 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:25.354 "strip_size_kb": 0, 00:12:25.354 "state": "online", 00:12:25.354 "raid_level": "raid1", 00:12:25.354 "superblock": true, 00:12:25.354 "num_base_bdevs": 2, 00:12:25.354 "num_base_bdevs_discovered": 2, 00:12:25.354 "num_base_bdevs_operational": 2, 00:12:25.354 "process": { 00:12:25.354 "type": "rebuild", 00:12:25.354 "target": "spare", 00:12:25.354 "progress": { 00:12:25.354 "blocks": 14336, 00:12:25.354 "percent": 22 00:12:25.354 } 00:12:25.354 }, 00:12:25.354 "base_bdevs_list": [ 00:12:25.354 { 00:12:25.354 "name": "spare", 00:12:25.354 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:25.354 "is_configured": true, 
00:12:25.354 "data_offset": 2048, 00:12:25.354 "data_size": 63488 00:12:25.354 }, 00:12:25.354 { 00:12:25.354 "name": "BaseBdev2", 00:12:25.354 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:25.354 "is_configured": true, 00:12:25.354 "data_offset": 2048, 00:12:25.354 "data_size": 63488 00:12:25.354 } 00:12:25.354 ] 00:12:25.354 }' 00:12:25.354 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.614 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.614 18:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.614 18:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.614 18:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:25.614 [2024-11-28 18:52:55.217943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:26.445 136.25 IOPS, 408.75 MiB/s [2024-11-28T18:52:56.051Z] [2024-11-28 18:52:55.740068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:26.445 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.445 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.445 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.445 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.445 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.445 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.445 18:52:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.445 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.446 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.706 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.706 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.706 "name": "raid_bdev1", 00:12:26.706 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:26.706 "strip_size_kb": 0, 00:12:26.706 "state": "online", 00:12:26.706 "raid_level": "raid1", 00:12:26.706 "superblock": true, 00:12:26.706 "num_base_bdevs": 2, 00:12:26.706 "num_base_bdevs_discovered": 2, 00:12:26.706 "num_base_bdevs_operational": 2, 00:12:26.706 "process": { 00:12:26.706 "type": "rebuild", 00:12:26.706 "target": "spare", 00:12:26.706 "progress": { 00:12:26.706 "blocks": 30720, 00:12:26.706 "percent": 48 00:12:26.706 } 00:12:26.706 }, 00:12:26.706 "base_bdevs_list": [ 00:12:26.706 { 00:12:26.706 "name": "spare", 00:12:26.706 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:26.706 "is_configured": true, 00:12:26.706 "data_offset": 2048, 00:12:26.706 "data_size": 63488 00:12:26.706 }, 00:12:26.706 { 00:12:26.706 "name": "BaseBdev2", 00:12:26.706 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:26.706 "is_configured": true, 00:12:26.706 "data_offset": 2048, 00:12:26.706 "data_size": 63488 00:12:26.706 } 00:12:26.706 ] 00:12:26.706 }' 00:12:26.706 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.706 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.706 18:52:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.706 [2024-11-28 18:52:56.184754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:26.706 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.706 18:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.966 [2024-11-28 18:52:56.415051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:26.967 120.00 IOPS, 360.00 MiB/s [2024-11-28T18:52:56.573Z] [2024-11-28 18:52:56.538853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:27.225 [2024-11-28 18:52:56.751128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:27.485 [2024-11-28 18:52:56.975962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:27.745 [2024-11-28 18:52:57.194774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.745 "name": "raid_bdev1", 00:12:27.745 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:27.745 "strip_size_kb": 0, 00:12:27.745 "state": "online", 00:12:27.745 "raid_level": "raid1", 00:12:27.745 "superblock": true, 00:12:27.745 "num_base_bdevs": 2, 00:12:27.745 "num_base_bdevs_discovered": 2, 00:12:27.745 "num_base_bdevs_operational": 2, 00:12:27.745 "process": { 00:12:27.745 "type": "rebuild", 00:12:27.745 "target": "spare", 00:12:27.745 "progress": { 00:12:27.745 "blocks": 51200, 00:12:27.745 "percent": 80 00:12:27.745 } 00:12:27.745 }, 00:12:27.745 "base_bdevs_list": [ 00:12:27.745 { 00:12:27.745 "name": "spare", 00:12:27.745 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:27.745 "is_configured": true, 00:12:27.745 "data_offset": 2048, 00:12:27.745 "data_size": 63488 00:12:27.745 }, 00:12:27.745 { 00:12:27.745 "name": "BaseBdev2", 00:12:27.745 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:27.745 "is_configured": true, 00:12:27.745 "data_offset": 2048, 00:12:27.745 "data_size": 63488 00:12:27.745 } 00:12:27.745 ] 00:12:27.745 }' 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.745 18:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.265 106.67 IOPS, 320.00 MiB/s [2024-11-28T18:52:57.872Z] [2024-11-28 18:52:57.632817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:28.266 [2024-11-28 18:52:57.739566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:28.526 [2024-11-28 18:52:57.958714] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:28.526 [2024-11-28 18:52:58.058721] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:28.526 [2024-11-28 18:52:58.065937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.786 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.046 "name": "raid_bdev1", 00:12:29.046 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:29.046 "strip_size_kb": 0, 00:12:29.046 "state": "online", 00:12:29.046 "raid_level": "raid1", 00:12:29.046 "superblock": true, 00:12:29.046 "num_base_bdevs": 2, 00:12:29.046 "num_base_bdevs_discovered": 2, 00:12:29.046 "num_base_bdevs_operational": 2, 00:12:29.046 "base_bdevs_list": [ 00:12:29.046 { 00:12:29.046 "name": "spare", 00:12:29.046 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:29.046 "is_configured": true, 00:12:29.046 "data_offset": 2048, 00:12:29.046 "data_size": 63488 00:12:29.046 }, 00:12:29.046 { 00:12:29.046 "name": "BaseBdev2", 00:12:29.046 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:29.046 "is_configured": true, 00:12:29.046 "data_offset": 2048, 00:12:29.046 "data_size": 63488 00:12:29.046 } 00:12:29.046 ] 00:12:29.046 }' 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.046 96.29 IOPS, 288.86 MiB/s [2024-11-28T18:52:58.652Z] 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:29.046 18:52:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.046 "name": "raid_bdev1", 00:12:29.046 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:29.046 "strip_size_kb": 0, 00:12:29.046 "state": "online", 00:12:29.046 "raid_level": "raid1", 00:12:29.046 "superblock": true, 00:12:29.046 "num_base_bdevs": 2, 00:12:29.046 "num_base_bdevs_discovered": 2, 00:12:29.046 "num_base_bdevs_operational": 2, 00:12:29.046 "base_bdevs_list": [ 00:12:29.046 { 00:12:29.046 "name": "spare", 00:12:29.046 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:29.046 "is_configured": true, 00:12:29.046 "data_offset": 2048, 00:12:29.046 "data_size": 63488 00:12:29.046 }, 00:12:29.046 { 00:12:29.046 "name": "BaseBdev2", 00:12:29.046 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:29.046 "is_configured": true, 00:12:29.046 
"data_offset": 2048, 00:12:29.046 "data_size": 63488 00:12:29.046 } 00:12:29.046 ] 00:12:29.046 }' 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.046 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.306 "name": "raid_bdev1", 00:12:29.306 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:29.306 "strip_size_kb": 0, 00:12:29.306 "state": "online", 00:12:29.306 "raid_level": "raid1", 00:12:29.306 "superblock": true, 00:12:29.306 "num_base_bdevs": 2, 00:12:29.306 "num_base_bdevs_discovered": 2, 00:12:29.306 "num_base_bdevs_operational": 2, 00:12:29.306 "base_bdevs_list": [ 00:12:29.306 { 00:12:29.306 "name": "spare", 00:12:29.306 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:29.306 "is_configured": true, 00:12:29.306 "data_offset": 2048, 00:12:29.306 "data_size": 63488 00:12:29.306 }, 00:12:29.306 { 00:12:29.306 "name": "BaseBdev2", 00:12:29.306 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:29.306 "is_configured": true, 00:12:29.306 "data_offset": 2048, 00:12:29.306 "data_size": 63488 00:12:29.306 } 00:12:29.306 ] 00:12:29.306 }' 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.306 18:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.566 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.566 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.566 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.566 [2024-11-28 18:52:59.079294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.566 [2024-11-28 18:52:59.079408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:12:29.566 00:12:29.566 Latency(us) 00:12:29.566 [2024-11-28T18:52:59.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.566 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:29.566 raid_bdev1 : 7.70 91.13 273.38 0.00 0.00 15061.63 276.68 111959.00 00:12:29.566 [2024-11-28T18:52:59.172Z] =================================================================================================================== 00:12:29.566 [2024-11-28T18:52:59.172Z] Total : 91.13 273.38 0.00 0.00 15061.63 276.68 111959.00 00:12:29.566 [2024-11-28 18:52:59.150338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.566 [2024-11-28 18:52:59.150424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.566 [2024-11-28 18:52:59.150540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.566 [2024-11-28 18:52:59.150589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:29.566 { 00:12:29.566 "results": [ 00:12:29.566 { 00:12:29.566 "job": "raid_bdev1", 00:12:29.566 "core_mask": "0x1", 00:12:29.566 "workload": "randrw", 00:12:29.566 "percentage": 50, 00:12:29.566 "status": "finished", 00:12:29.566 "queue_depth": 2, 00:12:29.566 "io_size": 3145728, 00:12:29.566 "runtime": 7.703543, 00:12:29.566 "iops": 91.12690095972724, 00:12:29.566 "mibps": 273.38070287918174, 00:12:29.566 "io_failed": 0, 00:12:29.566 "io_timeout": 0, 00:12:29.566 "avg_latency_us": 15061.630430221036, 00:12:29.566 "min_latency_us": 276.6843894360673, 00:12:29.566 "max_latency_us": 111958.99938987187 00:12:29.566 } 00:12:29.566 ], 00:12:29.566 "core_count": 1 00:12:29.566 } 00:12:29.566 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.566 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.566 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:29.566 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.566 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.566 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.827 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:29.827 /dev/nbd0 00:12:30.088 18:52:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.088 1+0 records in 00:12:30.088 1+0 records out 00:12:30.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417067 s, 9.8 MB/s 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 
00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.088 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:30.088 /dev/nbd1 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.349 1+0 records in 00:12:30.349 1+0 records out 00:12:30.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545421 s, 7.5 MB/s 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:30.349 18:52:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.349 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:30.610 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.610 18:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.610 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.870 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.870 [2024-11-28 18:53:00.242569] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:30.871 [2024-11-28 18:53:00.242620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.871 [2024-11-28 18:53:00.242655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:30.871 [2024-11-28 18:53:00.242666] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.871 [2024-11-28 18:53:00.244829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.871 spare 00:12:30.871 [2024-11-28 18:53:00.244914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:30.871 [2024-11-28 18:53:00.245002] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:30.871 [2024-11-28 18:53:00.245051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.871 [2024-11-28 18:53:00.245175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.871 [2024-11-28 18:53:00.345246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:30.871 [2024-11-28 18:53:00.345316] bdev_raid.c:1735:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:12:30.871 [2024-11-28 18:53:00.345612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:12:30.871 [2024-11-28 18:53:00.345802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:30.871 [2024-11-28 18:53:00.345854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:30.871 [2024-11-28 18:53:00.346011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.871 "name": "raid_bdev1", 00:12:30.871 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:30.871 "strip_size_kb": 0, 00:12:30.871 "state": "online", 00:12:30.871 "raid_level": "raid1", 00:12:30.871 "superblock": true, 00:12:30.871 "num_base_bdevs": 2, 00:12:30.871 "num_base_bdevs_discovered": 2, 00:12:30.871 "num_base_bdevs_operational": 2, 00:12:30.871 "base_bdevs_list": [ 00:12:30.871 { 00:12:30.871 "name": "spare", 00:12:30.871 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:30.871 "is_configured": true, 00:12:30.871 "data_offset": 2048, 00:12:30.871 "data_size": 63488 00:12:30.871 }, 00:12:30.871 { 00:12:30.871 "name": "BaseBdev2", 00:12:30.871 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:30.871 "is_configured": true, 00:12:30.871 "data_offset": 2048, 00:12:30.871 "data_size": 63488 00:12:30.871 } 00:12:30.871 ] 00:12:30.871 }' 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.871 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.131 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.131 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.131 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:31.131 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.131 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.391 "name": "raid_bdev1", 00:12:31.391 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:31.391 "strip_size_kb": 0, 00:12:31.391 "state": "online", 00:12:31.391 "raid_level": "raid1", 00:12:31.391 "superblock": true, 00:12:31.391 "num_base_bdevs": 2, 00:12:31.391 "num_base_bdevs_discovered": 2, 00:12:31.391 "num_base_bdevs_operational": 2, 00:12:31.391 "base_bdevs_list": [ 00:12:31.391 { 00:12:31.391 "name": "spare", 00:12:31.391 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:31.391 "is_configured": true, 00:12:31.391 "data_offset": 2048, 00:12:31.391 "data_size": 63488 00:12:31.391 }, 00:12:31.391 { 00:12:31.391 "name": "BaseBdev2", 00:12:31.391 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:31.391 "is_configured": true, 00:12:31.391 "data_offset": 2048, 00:12:31.391 "data_size": 63488 00:12:31.391 } 00:12:31.391 ] 00:12:31.391 }' 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:31.391 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.392 [2024-11-28 18:53:00.878828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.392 18:53:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.392 "name": "raid_bdev1", 00:12:31.392 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:31.392 "strip_size_kb": 0, 00:12:31.392 "state": "online", 00:12:31.392 "raid_level": "raid1", 00:12:31.392 "superblock": true, 00:12:31.392 "num_base_bdevs": 2, 00:12:31.392 "num_base_bdevs_discovered": 1, 00:12:31.392 "num_base_bdevs_operational": 1, 00:12:31.392 "base_bdevs_list": [ 00:12:31.392 { 00:12:31.392 "name": null, 00:12:31.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.392 "is_configured": false, 00:12:31.392 "data_offset": 0, 00:12:31.392 "data_size": 63488 00:12:31.392 }, 00:12:31.392 { 00:12:31.392 "name": "BaseBdev2", 00:12:31.392 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:31.392 "is_configured": true, 00:12:31.392 "data_offset": 2048, 00:12:31.392 
"data_size": 63488 00:12:31.392 } 00:12:31.392 ] 00:12:31.392 }' 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.392 18:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.962 18:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:31.962 18:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.962 18:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.962 [2024-11-28 18:53:01.339003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.962 [2024-11-28 18:53:01.339206] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:31.962 [2024-11-28 18:53:01.339223] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:31.962 [2024-11-28 18:53:01.339257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.962 [2024-11-28 18:53:01.344541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:12:31.962 18:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.962 18:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:31.962 [2024-11-28 18:53:01.346583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.903 "name": "raid_bdev1", 00:12:32.903 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:32.903 "strip_size_kb": 0, 00:12:32.903 "state": "online", 
00:12:32.903 "raid_level": "raid1", 00:12:32.903 "superblock": true, 00:12:32.903 "num_base_bdevs": 2, 00:12:32.903 "num_base_bdevs_discovered": 2, 00:12:32.903 "num_base_bdevs_operational": 2, 00:12:32.903 "process": { 00:12:32.903 "type": "rebuild", 00:12:32.903 "target": "spare", 00:12:32.903 "progress": { 00:12:32.903 "blocks": 20480, 00:12:32.903 "percent": 32 00:12:32.903 } 00:12:32.903 }, 00:12:32.903 "base_bdevs_list": [ 00:12:32.903 { 00:12:32.903 "name": "spare", 00:12:32.903 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:32.903 "is_configured": true, 00:12:32.903 "data_offset": 2048, 00:12:32.903 "data_size": 63488 00:12:32.903 }, 00:12:32.903 { 00:12:32.903 "name": "BaseBdev2", 00:12:32.903 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:32.903 "is_configured": true, 00:12:32.903 "data_offset": 2048, 00:12:32.903 "data_size": 63488 00:12:32.903 } 00:12:32.903 ] 00:12:32.903 }' 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.903 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.903 [2024-11-28 18:53:02.501846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.163 [2024-11-28 18:53:02.552878] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:33.163 [2024-11-28 
18:53:02.552999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.163 [2024-11-28 18:53:02.553036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.164 [2024-11-28 18:53:02.553068] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.164 "name": "raid_bdev1", 00:12:33.164 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:33.164 "strip_size_kb": 0, 00:12:33.164 "state": "online", 00:12:33.164 "raid_level": "raid1", 00:12:33.164 "superblock": true, 00:12:33.164 "num_base_bdevs": 2, 00:12:33.164 "num_base_bdevs_discovered": 1, 00:12:33.164 "num_base_bdevs_operational": 1, 00:12:33.164 "base_bdevs_list": [ 00:12:33.164 { 00:12:33.164 "name": null, 00:12:33.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.164 "is_configured": false, 00:12:33.164 "data_offset": 0, 00:12:33.164 "data_size": 63488 00:12:33.164 }, 00:12:33.164 { 00:12:33.164 "name": "BaseBdev2", 00:12:33.164 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:33.164 "is_configured": true, 00:12:33.164 "data_offset": 2048, 00:12:33.164 "data_size": 63488 00:12:33.164 } 00:12:33.164 ] 00:12:33.164 }' 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.164 18:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.424 18:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.424 18:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.424 18:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.424 [2024-11-28 18:53:03.017897] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:33.424 [2024-11-28 18:53:03.018009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.424 [2024-11-28 18:53:03.018048] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:12:33.424 [2024-11-28 18:53:03.018074] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.424 [2024-11-28 18:53:03.018550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.424 [2024-11-28 18:53:03.018609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.424 [2024-11-28 18:53:03.018717] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:33.424 [2024-11-28 18:53:03.018757] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:33.424 [2024-11-28 18:53:03.018797] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:33.424 [2024-11-28 18:53:03.018871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.424 [2024-11-28 18:53:03.023801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:12:33.424 spare 00:12:33.424 18:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.424 18:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:33.424 [2024-11-28 18:53:03.025966] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.806 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.806 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.806 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.806 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.806 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.806 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.806 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.806 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.806 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.807 "name": "raid_bdev1", 00:12:34.807 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:34.807 "strip_size_kb": 0, 00:12:34.807 "state": "online", 00:12:34.807 "raid_level": "raid1", 00:12:34.807 "superblock": true, 00:12:34.807 "num_base_bdevs": 2, 00:12:34.807 "num_base_bdevs_discovered": 2, 00:12:34.807 "num_base_bdevs_operational": 2, 00:12:34.807 "process": { 00:12:34.807 "type": "rebuild", 00:12:34.807 "target": "spare", 00:12:34.807 "progress": { 00:12:34.807 "blocks": 20480, 00:12:34.807 "percent": 32 00:12:34.807 } 00:12:34.807 }, 00:12:34.807 "base_bdevs_list": [ 00:12:34.807 { 00:12:34.807 "name": "spare", 00:12:34.807 "uuid": "bd408403-54d8-5d27-b3ca-f4129c6140fe", 00:12:34.807 "is_configured": true, 00:12:34.807 "data_offset": 2048, 00:12:34.807 "data_size": 63488 00:12:34.807 }, 00:12:34.807 { 00:12:34.807 "name": "BaseBdev2", 00:12:34.807 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:34.807 "is_configured": true, 00:12:34.807 "data_offset": 2048, 00:12:34.807 "data_size": 63488 00:12:34.807 } 00:12:34.807 ] 00:12:34.807 }' 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.807 [2024-11-28 18:53:04.187869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.807 [2024-11-28 18:53:04.232255] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:34.807 [2024-11-28 18:53:04.232315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.807 [2024-11-28 18:53:04.232329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.807 [2024-11-28 18:53:04.232341] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.807 "name": "raid_bdev1", 00:12:34.807 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:34.807 "strip_size_kb": 0, 00:12:34.807 "state": "online", 00:12:34.807 "raid_level": "raid1", 00:12:34.807 "superblock": true, 00:12:34.807 "num_base_bdevs": 2, 00:12:34.807 "num_base_bdevs_discovered": 1, 00:12:34.807 "num_base_bdevs_operational": 1, 00:12:34.807 "base_bdevs_list": [ 00:12:34.807 { 00:12:34.807 "name": null, 00:12:34.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.807 "is_configured": false, 00:12:34.807 "data_offset": 0, 00:12:34.807 "data_size": 63488 00:12:34.807 }, 00:12:34.807 { 00:12:34.807 "name": "BaseBdev2", 00:12:34.807 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:34.807 "is_configured": true, 00:12:34.807 "data_offset": 2048, 00:12:34.807 "data_size": 63488 00:12:34.807 } 00:12:34.807 ] 00:12:34.807 }' 
00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.807 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.429 "name": "raid_bdev1", 00:12:35.429 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:35.429 "strip_size_kb": 0, 00:12:35.429 "state": "online", 00:12:35.429 "raid_level": "raid1", 00:12:35.429 "superblock": true, 00:12:35.429 "num_base_bdevs": 2, 00:12:35.429 "num_base_bdevs_discovered": 1, 00:12:35.429 "num_base_bdevs_operational": 1, 00:12:35.429 "base_bdevs_list": [ 00:12:35.429 { 00:12:35.429 "name": null, 00:12:35.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.429 "is_configured": false, 00:12:35.429 "data_offset": 0, 
00:12:35.429 "data_size": 63488 00:12:35.429 }, 00:12:35.429 { 00:12:35.429 "name": "BaseBdev2", 00:12:35.429 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:35.429 "is_configured": true, 00:12:35.429 "data_offset": 2048, 00:12:35.429 "data_size": 63488 00:12:35.429 } 00:12:35.429 ] 00:12:35.429 }' 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.429 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.430 [2024-11-28 18:53:04.857213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:35.430 [2024-11-28 18:53:04.857265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.430 [2024-11-28 18:53:04.857286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:35.430 [2024-11-28 18:53:04.857297] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.430 [2024-11-28 18:53:04.857696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.430 [2024-11-28 18:53:04.857721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.430 [2024-11-28 18:53:04.857788] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:35.430 [2024-11-28 18:53:04.857811] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:35.430 [2024-11-28 18:53:04.857821] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:35.430 [2024-11-28 18:53:04.857849] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:35.430 BaseBdev1 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.430 18:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.434 "name": "raid_bdev1", 00:12:36.434 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:36.434 "strip_size_kb": 0, 00:12:36.434 "state": "online", 00:12:36.434 "raid_level": "raid1", 00:12:36.434 "superblock": true, 00:12:36.434 "num_base_bdevs": 2, 00:12:36.434 "num_base_bdevs_discovered": 1, 00:12:36.434 "num_base_bdevs_operational": 1, 00:12:36.434 "base_bdevs_list": [ 00:12:36.434 { 00:12:36.434 "name": null, 00:12:36.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.434 "is_configured": false, 00:12:36.434 "data_offset": 0, 00:12:36.434 "data_size": 63488 00:12:36.434 }, 00:12:36.434 { 00:12:36.434 "name": "BaseBdev2", 00:12:36.434 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:36.434 "is_configured": true, 00:12:36.434 "data_offset": 2048, 00:12:36.434 "data_size": 63488 00:12:36.434 } 00:12:36.434 ] 00:12:36.434 }' 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.434 18:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:36.694 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.694 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.694 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.694 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.694 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.694 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.694 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.694 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.694 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.954 "name": "raid_bdev1", 00:12:36.954 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:36.954 "strip_size_kb": 0, 00:12:36.954 "state": "online", 00:12:36.954 "raid_level": "raid1", 00:12:36.954 "superblock": true, 00:12:36.954 "num_base_bdevs": 2, 00:12:36.954 "num_base_bdevs_discovered": 1, 00:12:36.954 "num_base_bdevs_operational": 1, 00:12:36.954 "base_bdevs_list": [ 00:12:36.954 { 00:12:36.954 "name": null, 00:12:36.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.954 "is_configured": false, 00:12:36.954 "data_offset": 0, 00:12:36.954 "data_size": 63488 00:12:36.954 }, 00:12:36.954 { 00:12:36.954 "name": "BaseBdev2", 00:12:36.954 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:36.954 "is_configured": true, 
00:12:36.954 "data_offset": 2048, 00:12:36.954 "data_size": 63488 00:12:36.954 } 00:12:36.954 ] 00:12:36.954 }' 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.954 [2024-11-28 18:53:06.441818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.954 [2024-11-28 18:53:06.441973] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:36.954 [2024-11-28 18:53:06.441985] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:36.954 request: 00:12:36.954 { 00:12:36.954 "base_bdev": "BaseBdev1", 00:12:36.954 "raid_bdev": "raid_bdev1", 00:12:36.954 "method": "bdev_raid_add_base_bdev", 00:12:36.954 "req_id": 1 00:12:36.954 } 00:12:36.954 Got JSON-RPC error response 00:12:36.954 response: 00:12:36.954 { 00:12:36.954 "code": -22, 00:12:36.954 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:36.954 } 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:36.954 18:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.893 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.151 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.151 "name": "raid_bdev1", 00:12:38.151 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:38.151 "strip_size_kb": 0, 00:12:38.151 "state": "online", 00:12:38.151 "raid_level": "raid1", 00:12:38.151 "superblock": true, 00:12:38.151 "num_base_bdevs": 2, 00:12:38.152 "num_base_bdevs_discovered": 1, 00:12:38.152 "num_base_bdevs_operational": 1, 00:12:38.152 "base_bdevs_list": [ 00:12:38.152 { 00:12:38.152 "name": null, 00:12:38.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.152 "is_configured": false, 00:12:38.152 "data_offset": 0, 00:12:38.152 "data_size": 63488 00:12:38.152 }, 00:12:38.152 { 00:12:38.152 "name": "BaseBdev2", 00:12:38.152 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:38.152 "is_configured": true, 00:12:38.152 "data_offset": 2048, 00:12:38.152 "data_size": 63488 00:12:38.152 } 00:12:38.152 ] 00:12:38.152 }' 
00:12:38.152 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.152 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.410 "name": "raid_bdev1", 00:12:38.410 "uuid": "74227433-7527-4ef7-90c6-acd4847cfa1b", 00:12:38.410 "strip_size_kb": 0, 00:12:38.410 "state": "online", 00:12:38.410 "raid_level": "raid1", 00:12:38.410 "superblock": true, 00:12:38.410 "num_base_bdevs": 2, 00:12:38.410 "num_base_bdevs_discovered": 1, 00:12:38.410 "num_base_bdevs_operational": 1, 00:12:38.410 "base_bdevs_list": [ 00:12:38.410 { 00:12:38.410 "name": null, 00:12:38.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.410 "is_configured": false, 00:12:38.410 "data_offset": 0, 
00:12:38.410 "data_size": 63488 00:12:38.410 }, 00:12:38.410 { 00:12:38.410 "name": "BaseBdev2", 00:12:38.410 "uuid": "e49fa4ef-221b-5d90-be3e-f7e1ba67fa15", 00:12:38.410 "is_configured": true, 00:12:38.410 "data_offset": 2048, 00:12:38.410 "data_size": 63488 00:12:38.410 } 00:12:38.410 ] 00:12:38.410 }' 00:12:38.410 18:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 88989 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 88989 ']' 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 88989 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88989 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.669 killing process with pid 88989 00:12:38.669 Received shutdown signal, test time was about 16.679247 seconds 00:12:38.669 00:12:38.669 Latency(us) 00:12:38.669 [2024-11-28T18:53:08.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.669 [2024-11-28T18:53:08.275Z] =================================================================================================================== 00:12:38.669 
[2024-11-28T18:53:08.275Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88989' 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 88989 00:12:38.669 [2024-11-28 18:53:08.123645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.669 [2024-11-28 18:53:08.123762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.669 [2024-11-28 18:53:08.123812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.669 [2024-11-28 18:53:08.123822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:38.669 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 88989 00:12:38.669 [2024-11-28 18:53:08.150349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:38.928 00:12:38.928 real 0m18.617s 00:12:38.928 user 0m24.743s 00:12:38.928 sys 0m2.238s 00:12:38.928 ************************************ 00:12:38.928 END TEST raid_rebuild_test_sb_io 00:12:38.928 ************************************ 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.928 18:53:08 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:38.928 18:53:08 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:38.928 18:53:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 
']' 00:12:38.928 18:53:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.928 18:53:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.928 ************************************ 00:12:38.928 START TEST raid_rebuild_test 00:12:38.928 ************************************ 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.928 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # 
(( i <= num_base_bdevs )) 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=89664 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 89664 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 89664 ']' 00:12:38.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.929 18:53:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.188 [2024-11-28 18:53:08.555071] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:39.188 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:39.188 Zero copy mechanism will not be used. 00:12:39.188 [2024-11-28 18:53:08.555281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89664 ] 00:12:39.188 [2024-11-28 18:53:08.689932] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:39.188 [2024-11-28 18:53:08.729715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.188 [2024-11-28 18:53:08.756037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.446 [2024-11-28 18:53:08.799573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.446 [2024-11-28 18:53:08.799613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 BaseBdev1_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 [2024-11-28 18:53:09.408490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:40.016 [2024-11-28 18:53:09.408592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.016 [2024-11-28 18:53:09.408635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:40.016 [2024-11-28 18:53:09.408658] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.016 [2024-11-28 18:53:09.410933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.016 [2024-11-28 18:53:09.410969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.016 BaseBdev1 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 BaseBdev2_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 [2024-11-28 18:53:09.437244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:40.016 [2024-11-28 18:53:09.437297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.016 [2024-11-28 18:53:09.437331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:40.016 [2024-11-28 18:53:09.437341] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.016 [2024-11-28 18:53:09.439585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.016 [2024-11-28 18:53:09.439675] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:40.016 BaseBdev2 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 BaseBdev3_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 [2024-11-28 18:53:09.465809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:40.016 [2024-11-28 18:53:09.465858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.016 [2024-11-28 18:53:09.465893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:40.016 [2024-11-28 18:53:09.465903] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.016 [2024-11-28 18:53:09.467970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.016 [2024-11-28 18:53:09.468009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:40.016 BaseBdev3 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 
18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 BaseBdev4_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 [2024-11-28 18:53:09.512952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:40.016 [2024-11-28 18:53:09.513059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.016 [2024-11-28 18:53:09.513091] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:40.016 [2024-11-28 18:53:09.513108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.016 [2024-11-28 18:53:09.516460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.016 [2024-11-28 18:53:09.516514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:40.016 BaseBdev4 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 spare_malloc 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 spare_delay 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 [2024-11-28 18:53:09.554651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.016 [2024-11-28 18:53:09.554754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.016 [2024-11-28 18:53:09.554775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:40.016 [2024-11-28 18:53:09.554785] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.016 [2024-11-28 18:53:09.556854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.016 [2024-11-28 18:53:09.556892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.016 spare 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.016 [2024-11-28 18:53:09.566716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.016 [2024-11-28 18:53:09.568499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.016 [2024-11-28 18:53:09.568611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.016 [2024-11-28 18:53:09.568657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:40.016 [2024-11-28 18:53:09.568738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:40.016 [2024-11-28 18:53:09.568753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:40.016 [2024-11-28 18:53:09.568982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:40.016 [2024-11-28 18:53:09.569115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:40.016 [2024-11-28 18:53:09.569125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:40.016 [2024-11-28 18:53:09.569237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.016 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.017 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.017 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.017 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.017 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.276 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.276 "name": "raid_bdev1", 00:12:40.276 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8", 00:12:40.276 "strip_size_kb": 0, 00:12:40.276 "state": "online", 00:12:40.276 "raid_level": "raid1", 00:12:40.276 "superblock": false, 00:12:40.276 "num_base_bdevs": 4, 00:12:40.276 "num_base_bdevs_discovered": 4, 00:12:40.276 "num_base_bdevs_operational": 4, 00:12:40.276 "base_bdevs_list": [ 00:12:40.276 { 00:12:40.276 "name": "BaseBdev1", 00:12:40.276 "uuid": "7a9b8dfb-8df9-53c4-85b8-70672f60943e", 00:12:40.276 "is_configured": true, 00:12:40.276 "data_offset": 0, 00:12:40.276 "data_size": 65536 00:12:40.276 }, 00:12:40.276 { 00:12:40.276 
"name": "BaseBdev2", 00:12:40.276 "uuid": "4a053bf6-a2bd-58a0-ae89-4a6b8134e915", 00:12:40.276 "is_configured": true, 00:12:40.276 "data_offset": 0, 00:12:40.276 "data_size": 65536 00:12:40.276 }, 00:12:40.276 { 00:12:40.276 "name": "BaseBdev3", 00:12:40.276 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57", 00:12:40.276 "is_configured": true, 00:12:40.276 "data_offset": 0, 00:12:40.276 "data_size": 65536 00:12:40.276 }, 00:12:40.276 { 00:12:40.276 "name": "BaseBdev4", 00:12:40.276 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3", 00:12:40.276 "is_configured": true, 00:12:40.276 "data_offset": 0, 00:12:40.276 "data_size": 65536 00:12:40.276 } 00:12:40.276 ] 00:12:40.276 }' 00:12:40.276 18:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.276 18:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.535 [2024-11-28 18:53:10.023075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:40.535 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:40.794 [2024-11-28 18:53:10.294912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:40.794 /dev/nbd0 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:40.794 
18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:40.794 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.794 1+0 records in 00:12:40.794 1+0 records out 00:12:40.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393786 s, 10.4 MB/s 00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:40.795 18:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:47.366 65536+0 records in 00:12:47.367 65536+0 records out 00:12:47.367 33554432 bytes (34 MB, 32 MiB) copied, 5.54701 s, 6.0 MB/s 00:12:47.367 18:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:47.367 18:53:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.367 18:53:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:47.367 18:53:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:47.367 18:53:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:47.367 18:53:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.367 18:53:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:47.367 [2024-11-28 18:53:16.153442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:47.367 18:53:16 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.367 [2024-11-28 18:53:16.185531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.367 "name": "raid_bdev1", 00:12:47.367 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8", 00:12:47.367 "strip_size_kb": 0, 00:12:47.367 "state": "online", 00:12:47.367 "raid_level": "raid1", 00:12:47.367 "superblock": false, 00:12:47.367 "num_base_bdevs": 4, 00:12:47.367 "num_base_bdevs_discovered": 3, 00:12:47.367 "num_base_bdevs_operational": 3, 00:12:47.367 "base_bdevs_list": [ 00:12:47.367 { 00:12:47.367 "name": null, 00:12:47.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.367 "is_configured": false, 00:12:47.367 "data_offset": 0, 00:12:47.367 "data_size": 65536 00:12:47.367 }, 00:12:47.367 { 00:12:47.367 "name": "BaseBdev2", 00:12:47.367 "uuid": "4a053bf6-a2bd-58a0-ae89-4a6b8134e915", 00:12:47.367 "is_configured": true, 00:12:47.367 "data_offset": 0, 00:12:47.367 "data_size": 65536 00:12:47.367 }, 00:12:47.367 { 00:12:47.367 "name": "BaseBdev3", 00:12:47.367 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57", 00:12:47.367 "is_configured": true, 00:12:47.367 "data_offset": 0, 00:12:47.367 "data_size": 65536 00:12:47.367 }, 00:12:47.367 { 00:12:47.367 "name": "BaseBdev4", 00:12:47.367 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3", 00:12:47.367 "is_configured": true, 00:12:47.367 "data_offset": 0, 00:12:47.367 "data_size": 65536 00:12:47.367 } 00:12:47.367 ] 00:12:47.367 }' 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.367 [2024-11-28 18:53:16.641654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.367 [2024-11-28 18:53:16.645869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a180 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.367 18:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:47.367 [2024-11-28 18:53:16.647750] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.303 "name": "raid_bdev1", 00:12:48.303 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8", 
00:12:48.303 "strip_size_kb": 0, 00:12:48.303 "state": "online", 00:12:48.303 "raid_level": "raid1", 00:12:48.303 "superblock": false, 00:12:48.303 "num_base_bdevs": 4, 00:12:48.303 "num_base_bdevs_discovered": 4, 00:12:48.303 "num_base_bdevs_operational": 4, 00:12:48.303 "process": { 00:12:48.303 "type": "rebuild", 00:12:48.303 "target": "spare", 00:12:48.303 "progress": { 00:12:48.303 "blocks": 20480, 00:12:48.303 "percent": 31 00:12:48.303 } 00:12:48.303 }, 00:12:48.303 "base_bdevs_list": [ 00:12:48.303 { 00:12:48.303 "name": "spare", 00:12:48.303 "uuid": "94fa1b8a-65b1-589b-bbe9-b75f43841a15", 00:12:48.303 "is_configured": true, 00:12:48.303 "data_offset": 0, 00:12:48.303 "data_size": 65536 00:12:48.303 }, 00:12:48.303 { 00:12:48.303 "name": "BaseBdev2", 00:12:48.303 "uuid": "4a053bf6-a2bd-58a0-ae89-4a6b8134e915", 00:12:48.303 "is_configured": true, 00:12:48.303 "data_offset": 0, 00:12:48.303 "data_size": 65536 00:12:48.303 }, 00:12:48.303 { 00:12:48.303 "name": "BaseBdev3", 00:12:48.303 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57", 00:12:48.303 "is_configured": true, 00:12:48.303 "data_offset": 0, 00:12:48.303 "data_size": 65536 00:12:48.303 }, 00:12:48.303 { 00:12:48.303 "name": "BaseBdev4", 00:12:48.303 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3", 00:12:48.303 "is_configured": true, 00:12:48.303 "data_offset": 0, 00:12:48.303 "data_size": 65536 00:12:48.303 } 00:12:48.303 ] 00:12:48.303 }' 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.303 [2024-11-28 18:53:17.783474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.303 [2024-11-28 18:53:17.854335] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:48.303 [2024-11-28 18:53:17.854402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.303 [2024-11-28 18:53:17.854419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.303 [2024-11-28 18:53:17.854443] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:48.303 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.304 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.562 18:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.562 "name": "raid_bdev1", 00:12:48.562 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8", 00:12:48.562 "strip_size_kb": 0, 00:12:48.562 "state": "online", 00:12:48.562 "raid_level": "raid1", 00:12:48.562 "superblock": false, 00:12:48.562 "num_base_bdevs": 4, 00:12:48.562 "num_base_bdevs_discovered": 3, 00:12:48.562 "num_base_bdevs_operational": 3, 00:12:48.562 "base_bdevs_list": [ 00:12:48.562 { 00:12:48.562 "name": null, 00:12:48.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.562 "is_configured": false, 00:12:48.562 "data_offset": 0, 00:12:48.562 "data_size": 65536 00:12:48.562 }, 00:12:48.562 { 00:12:48.562 "name": "BaseBdev2", 00:12:48.562 "uuid": "4a053bf6-a2bd-58a0-ae89-4a6b8134e915", 00:12:48.562 "is_configured": true, 00:12:48.562 "data_offset": 0, 00:12:48.562 "data_size": 65536 00:12:48.562 }, 00:12:48.562 { 00:12:48.562 "name": "BaseBdev3", 00:12:48.562 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57", 00:12:48.562 "is_configured": true, 00:12:48.562 "data_offset": 0, 00:12:48.562 "data_size": 65536 00:12:48.562 }, 00:12:48.562 { 00:12:48.562 "name": "BaseBdev4", 00:12:48.562 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3", 00:12:48.562 "is_configured": true, 00:12:48.562 "data_offset": 0, 00:12:48.562 "data_size": 65536 00:12:48.562 } 00:12:48.562 ] 00:12:48.562 }' 00:12:48.562 18:53:17 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.562 18:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.821 "name": "raid_bdev1", 00:12:48.821 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8", 00:12:48.821 "strip_size_kb": 0, 00:12:48.821 "state": "online", 00:12:48.821 "raid_level": "raid1", 00:12:48.821 "superblock": false, 00:12:48.821 "num_base_bdevs": 4, 00:12:48.821 "num_base_bdevs_discovered": 3, 00:12:48.821 "num_base_bdevs_operational": 3, 00:12:48.821 "base_bdevs_list": [ 00:12:48.821 { 00:12:48.821 "name": null, 00:12:48.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.821 "is_configured": false, 00:12:48.821 "data_offset": 0, 00:12:48.821 "data_size": 65536 00:12:48.821 }, 00:12:48.821 { 00:12:48.821 "name": "BaseBdev2", 00:12:48.821 "uuid": 
"4a053bf6-a2bd-58a0-ae89-4a6b8134e915", 00:12:48.821 "is_configured": true, 00:12:48.821 "data_offset": 0, 00:12:48.821 "data_size": 65536 00:12:48.821 }, 00:12:48.821 { 00:12:48.821 "name": "BaseBdev3", 00:12:48.821 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57", 00:12:48.821 "is_configured": true, 00:12:48.821 "data_offset": 0, 00:12:48.821 "data_size": 65536 00:12:48.821 }, 00:12:48.821 { 00:12:48.821 "name": "BaseBdev4", 00:12:48.821 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3", 00:12:48.821 "is_configured": true, 00:12:48.821 "data_offset": 0, 00:12:48.821 "data_size": 65536 00:12:48.821 } 00:12:48.821 ] 00:12:48.821 }' 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.821 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.079 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.079 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:49.079 18:53:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.079 18:53:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.079 [2024-11-28 18:53:18.474826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.079 [2024-11-28 18:53:18.478774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a250 00:12:49.080 18:53:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.080 18:53:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:49.080 [2024-11-28 18:53:18.480678] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.018 18:53:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.018 "name": "raid_bdev1", 00:12:50.018 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8", 00:12:50.018 "strip_size_kb": 0, 00:12:50.018 "state": "online", 00:12:50.018 "raid_level": "raid1", 00:12:50.018 "superblock": false, 00:12:50.018 "num_base_bdevs": 4, 00:12:50.018 "num_base_bdevs_discovered": 4, 00:12:50.018 "num_base_bdevs_operational": 4, 00:12:50.018 "process": { 00:12:50.018 "type": "rebuild", 00:12:50.018 "target": "spare", 00:12:50.018 "progress": { 00:12:50.018 "blocks": 20480, 00:12:50.018 "percent": 31 00:12:50.018 } 00:12:50.018 }, 00:12:50.018 "base_bdevs_list": [ 00:12:50.018 { 00:12:50.018 "name": "spare", 00:12:50.018 "uuid": "94fa1b8a-65b1-589b-bbe9-b75f43841a15", 00:12:50.018 "is_configured": true, 00:12:50.018 "data_offset": 0, 00:12:50.018 "data_size": 65536 00:12:50.018 }, 00:12:50.018 { 
00:12:50.018 "name": "BaseBdev2", 00:12:50.018 "uuid": "4a053bf6-a2bd-58a0-ae89-4a6b8134e915", 00:12:50.018 "is_configured": true, 00:12:50.018 "data_offset": 0, 00:12:50.018 "data_size": 65536 00:12:50.018 }, 00:12:50.018 { 00:12:50.018 "name": "BaseBdev3", 00:12:50.018 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57", 00:12:50.018 "is_configured": true, 00:12:50.018 "data_offset": 0, 00:12:50.018 "data_size": 65536 00:12:50.018 }, 00:12:50.018 { 00:12:50.018 "name": "BaseBdev4", 00:12:50.018 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3", 00:12:50.018 "is_configured": true, 00:12:50.018 "data_offset": 0, 00:12:50.018 "data_size": 65536 00:12:50.018 } 00:12:50.018 ] 00:12:50.018 }' 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.018 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.278 [2024-11-28 18:53:19.640191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.278 
[2024-11-28 18:53:19.686766] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0a250 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.278 "name": "raid_bdev1", 00:12:50.278 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8", 00:12:50.278 "strip_size_kb": 0, 00:12:50.278 "state": "online", 00:12:50.278 "raid_level": "raid1", 00:12:50.278 "superblock": false, 00:12:50.278 "num_base_bdevs": 4, 00:12:50.278 "num_base_bdevs_discovered": 3, 00:12:50.278 "num_base_bdevs_operational": 3, 00:12:50.278 "process": { 
00:12:50.278 "type": "rebuild", 00:12:50.278 "target": "spare", 00:12:50.278 "progress": { 00:12:50.278 "blocks": 24576, 00:12:50.278 "percent": 37 00:12:50.278 } 00:12:50.278 }, 00:12:50.278 "base_bdevs_list": [ 00:12:50.278 { 00:12:50.278 "name": "spare", 00:12:50.278 "uuid": "94fa1b8a-65b1-589b-bbe9-b75f43841a15", 00:12:50.278 "is_configured": true, 00:12:50.278 "data_offset": 0, 00:12:50.278 "data_size": 65536 00:12:50.278 }, 00:12:50.278 { 00:12:50.278 "name": null, 00:12:50.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.278 "is_configured": false, 00:12:50.278 "data_offset": 0, 00:12:50.278 "data_size": 65536 00:12:50.278 }, 00:12:50.278 { 00:12:50.278 "name": "BaseBdev3", 00:12:50.278 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57", 00:12:50.278 "is_configured": true, 00:12:50.278 "data_offset": 0, 00:12:50.278 "data_size": 65536 00:12:50.278 }, 00:12:50.278 { 00:12:50.278 "name": "BaseBdev4", 00:12:50.278 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3", 00:12:50.278 "is_configured": true, 00:12:50.278 "data_offset": 0, 00:12:50.278 "data_size": 65536 00:12:50.278 } 00:12:50.278 ] 00:12:50.278 }' 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=352 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.278 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.279 "name": "raid_bdev1", 00:12:50.279 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8", 00:12:50.279 "strip_size_kb": 0, 00:12:50.279 "state": "online", 00:12:50.279 "raid_level": "raid1", 00:12:50.279 "superblock": false, 00:12:50.279 "num_base_bdevs": 4, 00:12:50.279 "num_base_bdevs_discovered": 3, 00:12:50.279 "num_base_bdevs_operational": 3, 00:12:50.279 "process": { 00:12:50.279 "type": "rebuild", 00:12:50.279 "target": "spare", 00:12:50.279 "progress": { 00:12:50.279 "blocks": 26624, 00:12:50.279 "percent": 40 00:12:50.279 } 00:12:50.279 }, 00:12:50.279 "base_bdevs_list": [ 00:12:50.279 { 00:12:50.279 "name": "spare", 00:12:50.279 "uuid": "94fa1b8a-65b1-589b-bbe9-b75f43841a15", 00:12:50.279 "is_configured": true, 00:12:50.279 "data_offset": 0, 00:12:50.279 "data_size": 65536 00:12:50.279 }, 00:12:50.279 { 00:12:50.279 "name": null, 00:12:50.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.279 "is_configured": false, 00:12:50.279 "data_offset": 0, 00:12:50.279 "data_size": 65536 00:12:50.279 }, 
00:12:50.279 { 00:12:50.279 "name": "BaseBdev3", 00:12:50.279 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57", 00:12:50.279 "is_configured": true, 00:12:50.279 "data_offset": 0, 00:12:50.279 "data_size": 65536 00:12:50.279 }, 00:12:50.279 { 00:12:50.279 "name": "BaseBdev4", 00:12:50.279 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3", 00:12:50.279 "is_configured": true, 00:12:50.279 "data_offset": 0, 00:12:50.279 "data_size": 65536 00:12:50.279 } 00:12:50.279 ] 00:12:50.279 }' 00:12:50.279 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.538 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.538 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.538 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.538 18:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.477 18:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.477 "name": "raid_bdev1", 00:12:51.477 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8", 00:12:51.477 "strip_size_kb": 0, 00:12:51.477 "state": "online", 00:12:51.477 "raid_level": "raid1", 00:12:51.477 "superblock": false, 00:12:51.477 "num_base_bdevs": 4, 00:12:51.477 "num_base_bdevs_discovered": 3, 00:12:51.477 "num_base_bdevs_operational": 3, 00:12:51.477 "process": { 00:12:51.477 "type": "rebuild", 00:12:51.477 "target": "spare", 00:12:51.477 "progress": { 00:12:51.477 "blocks": 49152, 00:12:51.477 "percent": 75 00:12:51.477 } 00:12:51.477 }, 00:12:51.477 "base_bdevs_list": [ 00:12:51.477 { 00:12:51.477 "name": "spare", 00:12:51.477 "uuid": "94fa1b8a-65b1-589b-bbe9-b75f43841a15", 00:12:51.477 "is_configured": true, 00:12:51.477 "data_offset": 0, 00:12:51.477 "data_size": 65536 00:12:51.477 }, 00:12:51.477 { 00:12:51.477 "name": null, 00:12:51.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.477 "is_configured": false, 00:12:51.477 "data_offset": 0, 00:12:51.477 "data_size": 65536 00:12:51.477 }, 00:12:51.477 { 00:12:51.477 "name": "BaseBdev3", 00:12:51.477 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57", 00:12:51.477 "is_configured": true, 00:12:51.477 "data_offset": 0, 00:12:51.477 "data_size": 65536 00:12:51.477 }, 00:12:51.477 { 00:12:51.477 "name": "BaseBdev4", 00:12:51.477 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3", 00:12:51.477 "is_configured": true, 00:12:51.477 "data_offset": 0, 00:12:51.477 "data_size": 65536 00:12:51.477 } 00:12:51.477 ] 00:12:51.477 }' 00:12:51.477 18:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.477 18:53:21 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.477 18:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.737 18:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.737 18:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:52.306 [2024-11-28 18:53:21.697150] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:52.306 [2024-11-28 18:53:21.697309] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:52.306 [2024-11-28 18:53:21.697397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:52.566 "name": "raid_bdev1",
00:12:52.566 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8",
00:12:52.566 "strip_size_kb": 0,
00:12:52.566 "state": "online",
00:12:52.566 "raid_level": "raid1",
00:12:52.566 "superblock": false,
00:12:52.566 "num_base_bdevs": 4,
00:12:52.566 "num_base_bdevs_discovered": 3,
00:12:52.566 "num_base_bdevs_operational": 3,
00:12:52.566 "base_bdevs_list": [
00:12:52.566 {
00:12:52.566 "name": "spare",
00:12:52.566 "uuid": "94fa1b8a-65b1-589b-bbe9-b75f43841a15",
00:12:52.566 "is_configured": true,
00:12:52.566 "data_offset": 0,
00:12:52.566 "data_size": 65536
00:12:52.566 },
00:12:52.566 {
00:12:52.566 "name": null,
00:12:52.566 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.566 "is_configured": false,
00:12:52.566 "data_offset": 0,
00:12:52.566 "data_size": 65536
00:12:52.566 },
00:12:52.566 {
00:12:52.566 "name": "BaseBdev3",
00:12:52.566 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57",
00:12:52.566 "is_configured": true,
00:12:52.566 "data_offset": 0,
00:12:52.566 "data_size": 65536
00:12:52.566 },
00:12:52.566 {
00:12:52.566 "name": "BaseBdev4",
00:12:52.566 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3",
00:12:52.566 "is_configured": true,
00:12:52.566 "data_offset": 0,
00:12:52.566 "data_size": 65536
00:12:52.566 }
00:12:52.566 ]
00:12:52.566 }'
00:12:52.566 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:52.826 "name": "raid_bdev1",
00:12:52.826 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8",
00:12:52.826 "strip_size_kb": 0,
00:12:52.826 "state": "online",
00:12:52.826 "raid_level": "raid1",
00:12:52.826 "superblock": false,
00:12:52.826 "num_base_bdevs": 4,
00:12:52.826 "num_base_bdevs_discovered": 3,
00:12:52.826 "num_base_bdevs_operational": 3,
00:12:52.826 "base_bdevs_list": [
00:12:52.826 {
00:12:52.826 "name": "spare",
00:12:52.826 "uuid": "94fa1b8a-65b1-589b-bbe9-b75f43841a15",
00:12:52.826 "is_configured": true,
00:12:52.826 "data_offset": 0,
00:12:52.826 "data_size": 65536
00:12:52.826 },
00:12:52.826 {
00:12:52.826 "name": null,
00:12:52.826 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.826 "is_configured": false,
00:12:52.826 "data_offset": 0,
00:12:52.826 "data_size": 65536
00:12:52.826 },
00:12:52.826 {
00:12:52.826 "name": "BaseBdev3",
00:12:52.826 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57",
00:12:52.826 "is_configured": true,
00:12:52.826 "data_offset": 0,
00:12:52.826 "data_size": 65536
00:12:52.826 },
00:12:52.826 {
00:12:52.826 "name": "BaseBdev4",
00:12:52.826 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3",
00:12:52.826 "is_configured": true,
00:12:52.826 "data_offset": 0,
00:12:52.826 "data_size": 65536
00:12:52.826 }
00:12:52.826 ]
00:12:52.826 }'
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.826 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:52.826 "name": "raid_bdev1",
00:12:52.826 "uuid": "8cbb0860-fa63-43ab-9553-db3b500d8ba8",
00:12:52.826 "strip_size_kb": 0,
00:12:52.826 "state": "online",
00:12:52.826 "raid_level": "raid1",
00:12:52.826 "superblock": false,
00:12:52.826 "num_base_bdevs": 4,
00:12:52.826 "num_base_bdevs_discovered": 3,
00:12:52.826 "num_base_bdevs_operational": 3,
00:12:52.826 "base_bdevs_list": [
00:12:52.826 {
00:12:52.826 "name": "spare",
00:12:52.826 "uuid": "94fa1b8a-65b1-589b-bbe9-b75f43841a15",
00:12:52.827 "is_configured": true,
00:12:52.827 "data_offset": 0,
00:12:52.827 "data_size": 65536
00:12:52.827 },
00:12:52.827 {
00:12:52.827 "name": null,
00:12:52.827 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:52.827 "is_configured": false,
00:12:52.827 "data_offset": 0,
00:12:52.827 "data_size": 65536
00:12:52.827 },
00:12:52.827 {
00:12:52.827 "name": "BaseBdev3",
00:12:52.827 "uuid": "f891fdbf-1b2a-53f0-b6a2-7d088e87ae57",
00:12:52.827 "is_configured": true,
00:12:52.827 "data_offset": 0,
00:12:52.827 "data_size": 65536
00:12:52.827 },
00:12:52.827 {
00:12:52.827 "name": "BaseBdev4",
00:12:52.827 "uuid": "4d96fdb6-64f1-5eb3-968e-6b76dbd43dc3",
00:12:52.827 "is_configured": true,
00:12:52.827 "data_offset": 0,
00:12:52.827 "data_size": 65536
00:12:52.827 }
00:12:52.827 ]
00:12:52.827 }'
00:12:52.827 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:52.827 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:53.396 [2024-11-28 18:53:22.805700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:53.396 [2024-11-28 18:53:22.805774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:53.396 [2024-11-28 18:53:22.805878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:53.396 [2024-11-28 18:53:22.805976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:53.396 [2024-11-28 18:53:22.806023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:53.396 18:53:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:12:53.656 /dev/nbd0
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:53.656 1+0 records in
00:12:53.656 1+0 records out
00:12:53.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631588 s, 6.5 MB/s
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:53.656 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:12:53.916 /dev/nbd1
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:53.916 1+0 records in
00:12:53.916 1+0 records out
00:12:53.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393542 s, 10.4 MB/s
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:53.916 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:54.175 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 89664
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 89664 ']'
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 89664
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89664
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89664'
killing process with pid 89664
18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 89664
Received shutdown signal, test time was about 60.000000 seconds
00:12:54.434
00:12:54.434 Latency(us)
00:12:54.434 [2024-11-28T18:53:24.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:54.434 [2024-11-28T18:53:24.040Z] ===================================================================================================================
00:12:54.434 [2024-11-28T18:53:24.040Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:12:54.434 [2024-11-28 18:53:23.916136] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:54.434 18:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 89664
[2024-11-28 18:53:23.966599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:12:54.720
00:12:54.720 real 0m15.732s
00:12:54.720 user 0m17.737s
00:12:54.720 sys 0m3.333s
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:54.720 ************************************
00:12:54.720 END TEST raid_rebuild_test
00:12:54.720 ************************************
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:54.720 18:53:24 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true
00:12:54.720 18:53:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:12:54.720 18:53:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:54.720 18:53:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:54.720 ************************************
00:12:54.720 START TEST raid_rebuild_test_sb
00:12:54.720 ************************************
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=90089
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 90089
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 90089 ']'
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:54.720 18:53:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:53:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
18:53:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.980 [2024-11-28 18:53:24.357767] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:12:54.980 [2024-11-28 18:53:24.357985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536).
00:12:54.980 Zero copy mechanism will not be used.
00:12:54.980 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90089 ]
00:12:54.980 [2024-11-28 18:53:24.492219] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:54.980 [2024-11-28 18:53:24.515723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:54.980 [2024-11-28 18:53:24.540725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:55.238 [2024-11-28 18:53:24.584153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:55.238 [2024-11-28 18:53:24.584196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 BaseBdev1_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 [2024-11-28 18:53:25.221730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:12:55.808 [2024-11-28 18:53:25.221845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:55.808 [2024-11-28 18:53:25.221870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:55.808 [2024-11-28 18:53:25.221885] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:55.808 [2024-11-28 18:53:25.224012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:55.808 [2024-11-28 18:53:25.224051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:55.808 BaseBdev1
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 BaseBdev2_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 [2024-11-28 18:53:25.250405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:12:55.808 [2024-11-28 18:53:25.250466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:55.808 [2024-11-28 18:53:25.250499] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:55.808 [2024-11-28 18:53:25.250509] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:55.808 [2024-11-28 18:53:25.252579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:55.808 [2024-11-28 18:53:25.252669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:55.808 BaseBdev2
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 BaseBdev3_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 [2024-11-28 18:53:25.278982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:12:55.808 [2024-11-28 18:53:25.279033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:55.808 [2024-11-28 18:53:25.279066] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:55.808 [2024-11-28 18:53:25.279076] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:55.808 [2024-11-28 18:53:25.281096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:55.808 [2024-11-28 18:53:25.281137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:12:55.808 BaseBdev3
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 BaseBdev4_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 [2024-11-28 18:53:25.322651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:12:55.808 [2024-11-28 18:53:25.322760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:55.808 [2024-11-28 18:53:25.322805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:12:55.808 [2024-11-28 18:53:25.322828] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:55.808 [2024-11-28 18:53:25.327049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:55.808 [2024-11-28 18:53:25.327101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:12:55.808 BaseBdev4
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 spare_malloc
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 spare_delay
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 [2024-11-28 18:53:25.364750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:12:55.808 [2024-11-28 18:53:25.364798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:55.808 [2024-11-28 18:53:25.364832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:55.808 [2024-11-28 18:53:25.364842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:55.808 [2024-11-28 18:53:25.367113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:55.808 [2024-11-28 18:53:25.367189] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:12:55.808 spare
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.808 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.808 [2024-11-28 18:53:25.376823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-11-28 18:53:25.378800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-11-28 18:53:25.378918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-11-28 18:53:25.378981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
[2024-11-28 18:53:25.379214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
[2024-11-28 18:53:25.379269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
[2024-11-28 18:53:25.379533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
[2024-11-28 18:53:25.379678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
[2024-11-28 18:53:25.379689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
[2024-11-28 18:53:25.379803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 --
# set +x 00:12:55.809 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.068 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.068 "name": "raid_bdev1", 00:12:56.068 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:12:56.068 "strip_size_kb": 0, 00:12:56.068 "state": "online", 00:12:56.068 "raid_level": "raid1", 00:12:56.068 "superblock": true, 00:12:56.068 "num_base_bdevs": 4, 00:12:56.068 "num_base_bdevs_discovered": 4, 00:12:56.068 "num_base_bdevs_operational": 4, 00:12:56.068 "base_bdevs_list": [ 00:12:56.068 { 00:12:56.068 "name": "BaseBdev1", 00:12:56.068 "uuid": "bb971a6b-1f94-5204-8535-26f5971d0af9", 00:12:56.069 "is_configured": true, 00:12:56.069 "data_offset": 2048, 00:12:56.069 "data_size": 63488 00:12:56.069 }, 00:12:56.069 { 00:12:56.069 "name": "BaseBdev2", 00:12:56.069 "uuid": "13544d44-0a7d-534b-b8d0-f1c8f3fe10bc", 00:12:56.069 "is_configured": true, 00:12:56.069 "data_offset": 2048, 00:12:56.069 "data_size": 63488 00:12:56.069 }, 00:12:56.069 { 00:12:56.069 "name": "BaseBdev3", 00:12:56.069 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:12:56.069 "is_configured": true, 00:12:56.069 "data_offset": 2048, 00:12:56.069 "data_size": 63488 00:12:56.069 }, 00:12:56.069 { 00:12:56.069 "name": "BaseBdev4", 00:12:56.069 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:12:56.069 "is_configured": true, 00:12:56.069 "data_offset": 2048, 00:12:56.069 "data_size": 63488 00:12:56.069 } 00:12:56.069 ] 00:12:56.069 }' 00:12:56.069 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.069 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:56.328 18:53:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.328 [2024-11-28 18:53:25.861200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:56.328 18:53:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:56.588 18:53:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:56.588 [2024-11-28 18:53:26.113062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:56.588 /dev/nbd0 00:12:56.588 18:53:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:56.588 18:53:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:12:56.589 1+0 records in 00:12:56.589 1+0 records out 00:12:56.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296364 s, 13.8 MB/s 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:56.589 18:53:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:03.176 63488+0 records in 00:13:03.176 63488+0 records out 00:13:03.176 32505856 bytes (33 MB, 31 MiB) copied, 5.61307 s, 5.8 MB/s 00:13:03.176 18:53:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:03.176 18:53:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.176 18:53:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.176 18:53:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.176 18:53:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:03.176 18:53:31 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.176 18:53:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:03.176 [2024-11-28 18:53:31.979630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.176 18:53:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.176 18:53:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.176 [2024-11-28 18:53:32.015658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.176 18:53:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.176 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.176 "name": "raid_bdev1", 00:13:03.176 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:03.176 "strip_size_kb": 0, 00:13:03.176 "state": "online", 00:13:03.176 "raid_level": "raid1", 00:13:03.176 "superblock": true, 00:13:03.176 "num_base_bdevs": 4, 00:13:03.176 "num_base_bdevs_discovered": 3, 00:13:03.176 "num_base_bdevs_operational": 3, 00:13:03.176 "base_bdevs_list": [ 00:13:03.176 { 00:13:03.176 "name": null, 00:13:03.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.177 
"is_configured": false, 00:13:03.177 "data_offset": 0, 00:13:03.177 "data_size": 63488 00:13:03.177 }, 00:13:03.177 { 00:13:03.177 "name": "BaseBdev2", 00:13:03.177 "uuid": "13544d44-0a7d-534b-b8d0-f1c8f3fe10bc", 00:13:03.177 "is_configured": true, 00:13:03.177 "data_offset": 2048, 00:13:03.177 "data_size": 63488 00:13:03.177 }, 00:13:03.177 { 00:13:03.177 "name": "BaseBdev3", 00:13:03.177 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:03.177 "is_configured": true, 00:13:03.177 "data_offset": 2048, 00:13:03.177 "data_size": 63488 00:13:03.177 }, 00:13:03.177 { 00:13:03.177 "name": "BaseBdev4", 00:13:03.177 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:03.177 "is_configured": true, 00:13:03.177 "data_offset": 2048, 00:13:03.177 "data_size": 63488 00:13:03.177 } 00:13:03.177 ] 00:13:03.177 }' 00:13:03.177 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.177 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.177 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.177 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.177 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.177 [2024-11-28 18:53:32.471812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.177 [2024-11-28 18:53:32.476064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3910 00:13:03.177 18:53:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.177 18:53:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:03.177 [2024-11-28 18:53:32.477936] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.116 "name": "raid_bdev1", 00:13:04.116 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:04.116 "strip_size_kb": 0, 00:13:04.116 "state": "online", 00:13:04.116 "raid_level": "raid1", 00:13:04.116 "superblock": true, 00:13:04.116 "num_base_bdevs": 4, 00:13:04.116 "num_base_bdevs_discovered": 4, 00:13:04.116 "num_base_bdevs_operational": 4, 00:13:04.116 "process": { 00:13:04.116 "type": "rebuild", 00:13:04.116 "target": "spare", 00:13:04.116 "progress": { 00:13:04.116 "blocks": 20480, 00:13:04.116 "percent": 32 00:13:04.116 } 00:13:04.116 }, 00:13:04.116 "base_bdevs_list": [ 00:13:04.116 { 00:13:04.116 "name": "spare", 00:13:04.116 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:04.116 "is_configured": true, 00:13:04.116 "data_offset": 2048, 00:13:04.116 "data_size": 63488 00:13:04.116 }, 00:13:04.116 { 
00:13:04.116 "name": "BaseBdev2", 00:13:04.116 "uuid": "13544d44-0a7d-534b-b8d0-f1c8f3fe10bc", 00:13:04.116 "is_configured": true, 00:13:04.116 "data_offset": 2048, 00:13:04.116 "data_size": 63488 00:13:04.116 }, 00:13:04.116 { 00:13:04.116 "name": "BaseBdev3", 00:13:04.116 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:04.116 "is_configured": true, 00:13:04.116 "data_offset": 2048, 00:13:04.116 "data_size": 63488 00:13:04.116 }, 00:13:04.116 { 00:13:04.116 "name": "BaseBdev4", 00:13:04.116 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:04.116 "is_configured": true, 00:13:04.116 "data_offset": 2048, 00:13:04.116 "data_size": 63488 00:13:04.116 } 00:13:04.116 ] 00:13:04.116 }' 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.116 [2024-11-28 18:53:33.621253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.116 [2024-11-28 18:53:33.684632] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.116 [2024-11-28 18:53:33.684734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.116 [2024-11-28 18:53:33.684753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.116 [2024-11-28 18:53:33.684765] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.116 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.117 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.376 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.376 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.376 "name": 
"raid_bdev1", 00:13:04.376 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:04.376 "strip_size_kb": 0, 00:13:04.376 "state": "online", 00:13:04.376 "raid_level": "raid1", 00:13:04.376 "superblock": true, 00:13:04.376 "num_base_bdevs": 4, 00:13:04.376 "num_base_bdevs_discovered": 3, 00:13:04.376 "num_base_bdevs_operational": 3, 00:13:04.376 "base_bdevs_list": [ 00:13:04.376 { 00:13:04.376 "name": null, 00:13:04.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.376 "is_configured": false, 00:13:04.376 "data_offset": 0, 00:13:04.376 "data_size": 63488 00:13:04.376 }, 00:13:04.376 { 00:13:04.376 "name": "BaseBdev2", 00:13:04.376 "uuid": "13544d44-0a7d-534b-b8d0-f1c8f3fe10bc", 00:13:04.376 "is_configured": true, 00:13:04.376 "data_offset": 2048, 00:13:04.376 "data_size": 63488 00:13:04.376 }, 00:13:04.376 { 00:13:04.376 "name": "BaseBdev3", 00:13:04.376 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:04.376 "is_configured": true, 00:13:04.376 "data_offset": 2048, 00:13:04.376 "data_size": 63488 00:13:04.376 }, 00:13:04.376 { 00:13:04.376 "name": "BaseBdev4", 00:13:04.376 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:04.376 "is_configured": true, 00:13:04.376 "data_offset": 2048, 00:13:04.376 "data_size": 63488 00:13:04.376 } 00:13:04.376 ] 00:13:04.376 }' 00:13:04.376 18:53:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.376 18:53:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.636 18:53:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.636 "name": "raid_bdev1", 00:13:04.636 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:04.636 "strip_size_kb": 0, 00:13:04.636 "state": "online", 00:13:04.636 "raid_level": "raid1", 00:13:04.636 "superblock": true, 00:13:04.636 "num_base_bdevs": 4, 00:13:04.636 "num_base_bdevs_discovered": 3, 00:13:04.636 "num_base_bdevs_operational": 3, 00:13:04.636 "base_bdevs_list": [ 00:13:04.636 { 00:13:04.636 "name": null, 00:13:04.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.636 "is_configured": false, 00:13:04.636 "data_offset": 0, 00:13:04.636 "data_size": 63488 00:13:04.636 }, 00:13:04.636 { 00:13:04.636 "name": "BaseBdev2", 00:13:04.636 "uuid": "13544d44-0a7d-534b-b8d0-f1c8f3fe10bc", 00:13:04.636 "is_configured": true, 00:13:04.636 "data_offset": 2048, 00:13:04.636 "data_size": 63488 00:13:04.636 }, 00:13:04.636 { 00:13:04.636 "name": "BaseBdev3", 00:13:04.636 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:04.636 "is_configured": true, 00:13:04.636 "data_offset": 2048, 00:13:04.636 "data_size": 63488 00:13:04.636 }, 00:13:04.636 { 00:13:04.636 "name": "BaseBdev4", 00:13:04.636 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:04.636 "is_configured": true, 00:13:04.636 "data_offset": 2048, 00:13:04.636 
"data_size": 63488 00:13:04.636 } 00:13:04.636 ] 00:13:04.636 }' 00:13:04.636 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.896 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.896 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.896 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.896 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:04.896 18:53:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.896 18:53:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.896 [2024-11-28 18:53:34.289308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.896 [2024-11-28 18:53:34.293128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca39e0 00:13:04.896 [2024-11-28 18:53:34.294984] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.896 18:53:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.896 18:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:05.876 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.877 "name": "raid_bdev1", 00:13:05.877 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:05.877 "strip_size_kb": 0, 00:13:05.877 "state": "online", 00:13:05.877 "raid_level": "raid1", 00:13:05.877 "superblock": true, 00:13:05.877 "num_base_bdevs": 4, 00:13:05.877 "num_base_bdevs_discovered": 4, 00:13:05.877 "num_base_bdevs_operational": 4, 00:13:05.877 "process": { 00:13:05.877 "type": "rebuild", 00:13:05.877 "target": "spare", 00:13:05.877 "progress": { 00:13:05.877 "blocks": 20480, 00:13:05.877 "percent": 32 00:13:05.877 } 00:13:05.877 }, 00:13:05.877 "base_bdevs_list": [ 00:13:05.877 { 00:13:05.877 "name": "spare", 00:13:05.877 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:05.877 "is_configured": true, 00:13:05.877 "data_offset": 2048, 00:13:05.877 "data_size": 63488 00:13:05.877 }, 00:13:05.877 { 00:13:05.877 "name": "BaseBdev2", 00:13:05.877 "uuid": "13544d44-0a7d-534b-b8d0-f1c8f3fe10bc", 00:13:05.877 "is_configured": true, 00:13:05.877 "data_offset": 2048, 00:13:05.877 "data_size": 63488 00:13:05.877 }, 00:13:05.877 { 00:13:05.877 "name": "BaseBdev3", 00:13:05.877 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:05.877 "is_configured": true, 00:13:05.877 "data_offset": 2048, 00:13:05.877 "data_size": 63488 00:13:05.877 }, 00:13:05.877 { 00:13:05.877 "name": "BaseBdev4", 00:13:05.877 "uuid": 
"29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:05.877 "is_configured": true, 00:13:05.877 "data_offset": 2048, 00:13:05.877 "data_size": 63488 00:13:05.877 } 00:13:05.877 ] 00:13:05.877 }' 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:05.877 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.877 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.877 [2024-11-28 18:53:35.459916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:06.136 [2024-11-28 18:53:35.601233] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca39e0 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.136 18:53:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.136 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.136 "name": "raid_bdev1", 00:13:06.136 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:06.136 "strip_size_kb": 0, 00:13:06.136 "state": "online", 00:13:06.136 "raid_level": "raid1", 00:13:06.136 "superblock": true, 00:13:06.136 "num_base_bdevs": 4, 00:13:06.136 "num_base_bdevs_discovered": 3, 00:13:06.136 "num_base_bdevs_operational": 3, 00:13:06.136 "process": { 00:13:06.136 "type": "rebuild", 00:13:06.136 "target": "spare", 00:13:06.136 "progress": { 00:13:06.136 "blocks": 24576, 00:13:06.136 "percent": 38 00:13:06.136 } 00:13:06.136 }, 00:13:06.136 "base_bdevs_list": 
[ 00:13:06.136 { 00:13:06.136 "name": "spare", 00:13:06.136 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:06.136 "is_configured": true, 00:13:06.136 "data_offset": 2048, 00:13:06.136 "data_size": 63488 00:13:06.136 }, 00:13:06.136 { 00:13:06.136 "name": null, 00:13:06.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.136 "is_configured": false, 00:13:06.137 "data_offset": 0, 00:13:06.137 "data_size": 63488 00:13:06.137 }, 00:13:06.137 { 00:13:06.137 "name": "BaseBdev3", 00:13:06.137 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:06.137 "is_configured": true, 00:13:06.137 "data_offset": 2048, 00:13:06.137 "data_size": 63488 00:13:06.137 }, 00:13:06.137 { 00:13:06.137 "name": "BaseBdev4", 00:13:06.137 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:06.137 "is_configured": true, 00:13:06.137 "data_offset": 2048, 00:13:06.137 "data_size": 63488 00:13:06.137 } 00:13:06.137 ] 00:13:06.137 }' 00:13:06.137 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.137 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.137 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=368 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.396 18:53:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.396 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.396 "name": "raid_bdev1", 00:13:06.396 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:06.396 "strip_size_kb": 0, 00:13:06.396 "state": "online", 00:13:06.396 "raid_level": "raid1", 00:13:06.396 "superblock": true, 00:13:06.396 "num_base_bdevs": 4, 00:13:06.396 "num_base_bdevs_discovered": 3, 00:13:06.396 "num_base_bdevs_operational": 3, 00:13:06.396 "process": { 00:13:06.396 "type": "rebuild", 00:13:06.396 "target": "spare", 00:13:06.396 "progress": { 00:13:06.396 "blocks": 26624, 00:13:06.396 "percent": 41 00:13:06.396 } 00:13:06.396 }, 00:13:06.396 "base_bdevs_list": [ 00:13:06.396 { 00:13:06.396 "name": "spare", 00:13:06.396 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:06.396 "is_configured": true, 00:13:06.396 "data_offset": 2048, 00:13:06.396 "data_size": 63488 00:13:06.396 }, 00:13:06.396 { 00:13:06.396 "name": null, 00:13:06.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.396 "is_configured": false, 00:13:06.396 "data_offset": 0, 00:13:06.396 "data_size": 63488 00:13:06.396 }, 00:13:06.396 { 00:13:06.396 "name": "BaseBdev3", 00:13:06.396 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:06.396 
"is_configured": true, 00:13:06.396 "data_offset": 2048, 00:13:06.396 "data_size": 63488 00:13:06.396 }, 00:13:06.396 { 00:13:06.396 "name": "BaseBdev4", 00:13:06.396 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:06.396 "is_configured": true, 00:13:06.397 "data_offset": 2048, 00:13:06.397 "data_size": 63488 00:13:06.397 } 00:13:06.397 ] 00:13:06.397 }' 00:13:06.397 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.397 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.397 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.397 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.397 18:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.335 18:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.594 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.594 "name": "raid_bdev1", 00:13:07.594 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:07.594 "strip_size_kb": 0, 00:13:07.594 "state": "online", 00:13:07.594 "raid_level": "raid1", 00:13:07.594 "superblock": true, 00:13:07.594 "num_base_bdevs": 4, 00:13:07.594 "num_base_bdevs_discovered": 3, 00:13:07.594 "num_base_bdevs_operational": 3, 00:13:07.594 "process": { 00:13:07.594 "type": "rebuild", 00:13:07.594 "target": "spare", 00:13:07.594 "progress": { 00:13:07.594 "blocks": 49152, 00:13:07.594 "percent": 77 00:13:07.594 } 00:13:07.594 }, 00:13:07.594 "base_bdevs_list": [ 00:13:07.594 { 00:13:07.594 "name": "spare", 00:13:07.594 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:07.594 "is_configured": true, 00:13:07.594 "data_offset": 2048, 00:13:07.594 "data_size": 63488 00:13:07.594 }, 00:13:07.594 { 00:13:07.594 "name": null, 00:13:07.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.594 "is_configured": false, 00:13:07.594 "data_offset": 0, 00:13:07.594 "data_size": 63488 00:13:07.594 }, 00:13:07.594 { 00:13:07.594 "name": "BaseBdev3", 00:13:07.594 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:07.594 "is_configured": true, 00:13:07.594 "data_offset": 2048, 00:13:07.594 "data_size": 63488 00:13:07.595 }, 00:13:07.595 { 00:13:07.595 "name": "BaseBdev4", 00:13:07.595 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:07.595 "is_configured": true, 00:13:07.595 "data_offset": 2048, 00:13:07.595 "data_size": 63488 00:13:07.595 } 00:13:07.595 ] 00:13:07.595 }' 00:13:07.595 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.595 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:07.595 18:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.595 18:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.595 18:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.163 [2024-11-28 18:53:37.511074] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:08.163 [2024-11-28 18:53:37.511216] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:08.163 [2024-11-28 18:53:37.511359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- 
# raid_bdev_info='{ 00:13:08.733 "name": "raid_bdev1", 00:13:08.733 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:08.733 "strip_size_kb": 0, 00:13:08.733 "state": "online", 00:13:08.733 "raid_level": "raid1", 00:13:08.733 "superblock": true, 00:13:08.733 "num_base_bdevs": 4, 00:13:08.733 "num_base_bdevs_discovered": 3, 00:13:08.733 "num_base_bdevs_operational": 3, 00:13:08.733 "base_bdevs_list": [ 00:13:08.733 { 00:13:08.733 "name": "spare", 00:13:08.733 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:08.733 "is_configured": true, 00:13:08.733 "data_offset": 2048, 00:13:08.733 "data_size": 63488 00:13:08.733 }, 00:13:08.733 { 00:13:08.733 "name": null, 00:13:08.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.733 "is_configured": false, 00:13:08.733 "data_offset": 0, 00:13:08.733 "data_size": 63488 00:13:08.733 }, 00:13:08.733 { 00:13:08.733 "name": "BaseBdev3", 00:13:08.733 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:08.733 "is_configured": true, 00:13:08.733 "data_offset": 2048, 00:13:08.733 "data_size": 63488 00:13:08.733 }, 00:13:08.733 { 00:13:08.733 "name": "BaseBdev4", 00:13:08.733 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:08.733 "is_configured": true, 00:13:08.733 "data_offset": 2048, 00:13:08.733 "data_size": 63488 00:13:08.733 } 00:13:08.733 ] 00:13:08.733 }' 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.733 "name": "raid_bdev1", 00:13:08.733 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:08.733 "strip_size_kb": 0, 00:13:08.733 "state": "online", 00:13:08.733 "raid_level": "raid1", 00:13:08.733 "superblock": true, 00:13:08.733 "num_base_bdevs": 4, 00:13:08.733 "num_base_bdevs_discovered": 3, 00:13:08.733 "num_base_bdevs_operational": 3, 00:13:08.733 "base_bdevs_list": [ 00:13:08.733 { 00:13:08.733 "name": "spare", 00:13:08.733 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:08.733 "is_configured": true, 00:13:08.733 "data_offset": 2048, 00:13:08.733 "data_size": 63488 00:13:08.733 }, 00:13:08.733 { 00:13:08.733 "name": null, 00:13:08.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.733 "is_configured": false, 00:13:08.733 "data_offset": 0, 00:13:08.733 "data_size": 63488 00:13:08.733 }, 00:13:08.733 { 00:13:08.733 "name": "BaseBdev3", 00:13:08.733 "uuid": 
"a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:08.733 "is_configured": true, 00:13:08.733 "data_offset": 2048, 00:13:08.733 "data_size": 63488 00:13:08.733 }, 00:13:08.733 { 00:13:08.733 "name": "BaseBdev4", 00:13:08.733 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:08.733 "is_configured": true, 00:13:08.733 "data_offset": 2048, 00:13:08.733 "data_size": 63488 00:13:08.733 } 00:13:08.733 ] 00:13:08.733 }' 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.733 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.993 18:53:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.993 "name": "raid_bdev1", 00:13:08.993 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:08.993 "strip_size_kb": 0, 00:13:08.993 "state": "online", 00:13:08.993 "raid_level": "raid1", 00:13:08.993 "superblock": true, 00:13:08.993 "num_base_bdevs": 4, 00:13:08.993 "num_base_bdevs_discovered": 3, 00:13:08.993 "num_base_bdevs_operational": 3, 00:13:08.993 "base_bdevs_list": [ 00:13:08.993 { 00:13:08.993 "name": "spare", 00:13:08.993 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:08.993 "is_configured": true, 00:13:08.993 "data_offset": 2048, 00:13:08.993 "data_size": 63488 00:13:08.993 }, 00:13:08.993 { 00:13:08.993 "name": null, 00:13:08.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.993 "is_configured": false, 00:13:08.993 "data_offset": 0, 00:13:08.993 "data_size": 63488 00:13:08.993 }, 00:13:08.993 { 00:13:08.993 "name": "BaseBdev3", 00:13:08.993 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:08.993 "is_configured": true, 00:13:08.993 "data_offset": 2048, 00:13:08.993 "data_size": 63488 00:13:08.993 }, 00:13:08.993 { 00:13:08.993 "name": "BaseBdev4", 00:13:08.993 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:08.993 "is_configured": true, 00:13:08.993 "data_offset": 2048, 00:13:08.993 "data_size": 63488 00:13:08.993 } 00:13:08.993 ] 00:13:08.993 }' 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.993 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.253 [2024-11-28 18:53:38.759833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.253 [2024-11-28 18:53:38.759912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.253 [2024-11-28 18:53:38.760009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.253 [2024-11-28 18:53:38.760088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.253 [2024-11-28 18:53:38.760098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:09.253 18:53:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:09.253 18:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:09.513 /dev/nbd0 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w 
nbd0 /proc/partitions 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.513 1+0 records in 00:13:09.513 1+0 records out 00:13:09.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445693 s, 9.2 MB/s 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:09.513 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:09.773 /dev/nbd1 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:09.773 18:53:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.773 1+0 records in 00:13:09.773 1+0 records out 00:13:09.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419679 s, 9.8 MB/s 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.773 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.033 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.293 18:53:39 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.293 [2024-11-28 18:53:39.825865] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:10.293 [2024-11-28 18:53:39.825923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.293 [2024-11-28 18:53:39.825963] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:10.293 [2024-11-28 18:53:39.825971] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.293 [2024-11-28 18:53:39.828106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.293 [2024-11-28 18:53:39.828144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:10.293 [2024-11-28 18:53:39.828225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:10.293 [2024-11-28 18:53:39.828273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.293 [2024-11-28 18:53:39.828385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:10.293 [2024-11-28 18:53:39.828502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:10.293 spare 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.293 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.551 [2024-11-28 18:53:39.928575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:10.551 [2024-11-28 18:53:39.928603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:10.551 [2024-11-28 18:53:39.928879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:13:10.551 [2024-11-28 18:53:39.929031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:10.551 [2024-11-28 18:53:39.929050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:10.551 [2024-11-28 18:53:39.929173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.551 "name": "raid_bdev1", 00:13:10.551 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:10.551 
"strip_size_kb": 0, 00:13:10.551 "state": "online", 00:13:10.551 "raid_level": "raid1", 00:13:10.551 "superblock": true, 00:13:10.551 "num_base_bdevs": 4, 00:13:10.551 "num_base_bdevs_discovered": 3, 00:13:10.551 "num_base_bdevs_operational": 3, 00:13:10.551 "base_bdevs_list": [ 00:13:10.551 { 00:13:10.551 "name": "spare", 00:13:10.551 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:10.551 "is_configured": true, 00:13:10.551 "data_offset": 2048, 00:13:10.551 "data_size": 63488 00:13:10.551 }, 00:13:10.551 { 00:13:10.551 "name": null, 00:13:10.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.551 "is_configured": false, 00:13:10.551 "data_offset": 2048, 00:13:10.551 "data_size": 63488 00:13:10.551 }, 00:13:10.551 { 00:13:10.551 "name": "BaseBdev3", 00:13:10.551 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:10.551 "is_configured": true, 00:13:10.551 "data_offset": 2048, 00:13:10.551 "data_size": 63488 00:13:10.551 }, 00:13:10.551 { 00:13:10.551 "name": "BaseBdev4", 00:13:10.551 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:10.551 "is_configured": true, 00:13:10.551 "data_offset": 2048, 00:13:10.551 "data_size": 63488 00:13:10.551 } 00:13:10.551 ] 00:13:10.551 }' 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.551 18:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.810 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.810 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.810 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.810 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.810 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.810 18:53:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.810 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.810 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.810 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.810 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.070 "name": "raid_bdev1", 00:13:11.070 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:11.070 "strip_size_kb": 0, 00:13:11.070 "state": "online", 00:13:11.070 "raid_level": "raid1", 00:13:11.070 "superblock": true, 00:13:11.070 "num_base_bdevs": 4, 00:13:11.070 "num_base_bdevs_discovered": 3, 00:13:11.070 "num_base_bdevs_operational": 3, 00:13:11.070 "base_bdevs_list": [ 00:13:11.070 { 00:13:11.070 "name": "spare", 00:13:11.070 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:11.070 "is_configured": true, 00:13:11.070 "data_offset": 2048, 00:13:11.070 "data_size": 63488 00:13:11.070 }, 00:13:11.070 { 00:13:11.070 "name": null, 00:13:11.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.070 "is_configured": false, 00:13:11.070 "data_offset": 2048, 00:13:11.070 "data_size": 63488 00:13:11.070 }, 00:13:11.070 { 00:13:11.070 "name": "BaseBdev3", 00:13:11.070 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:11.070 "is_configured": true, 00:13:11.070 "data_offset": 2048, 00:13:11.070 "data_size": 63488 00:13:11.070 }, 00:13:11.070 { 00:13:11.070 "name": "BaseBdev4", 00:13:11.070 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:11.070 "is_configured": true, 00:13:11.070 "data_offset": 2048, 00:13:11.070 "data_size": 63488 00:13:11.070 } 00:13:11.070 ] 00:13:11.070 }' 00:13:11.070 18:53:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.070 [2024-11-28 18:53:40.586122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.070 18:53:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.070 "name": "raid_bdev1", 00:13:11.070 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:11.070 "strip_size_kb": 0, 00:13:11.070 "state": "online", 00:13:11.070 "raid_level": "raid1", 00:13:11.070 "superblock": true, 00:13:11.070 "num_base_bdevs": 4, 00:13:11.070 "num_base_bdevs_discovered": 2, 00:13:11.070 "num_base_bdevs_operational": 2, 00:13:11.070 "base_bdevs_list": [ 00:13:11.070 { 00:13:11.070 "name": null, 00:13:11.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.070 "is_configured": false, 00:13:11.070 "data_offset": 0, 00:13:11.070 "data_size": 63488 00:13:11.070 }, 00:13:11.070 { 
00:13:11.070 "name": null, 00:13:11.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.070 "is_configured": false, 00:13:11.070 "data_offset": 2048, 00:13:11.070 "data_size": 63488 00:13:11.070 }, 00:13:11.070 { 00:13:11.070 "name": "BaseBdev3", 00:13:11.070 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:11.070 "is_configured": true, 00:13:11.070 "data_offset": 2048, 00:13:11.070 "data_size": 63488 00:13:11.070 }, 00:13:11.070 { 00:13:11.070 "name": "BaseBdev4", 00:13:11.070 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:11.070 "is_configured": true, 00:13:11.070 "data_offset": 2048, 00:13:11.070 "data_size": 63488 00:13:11.070 } 00:13:11.070 ] 00:13:11.070 }' 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.070 18:53:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.640 18:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.640 18:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.640 18:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.640 [2024-11-28 18:53:41.086370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.640 [2024-11-28 18:53:41.086826] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:11.640 [2024-11-28 18:53:41.086850] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:11.640 [2024-11-28 18:53:41.086904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.640 [2024-11-28 18:53:41.091027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2160 00:13:11.640 18:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.640 18:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:11.640 [2024-11-28 18:53:41.092958] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.578 "name": "raid_bdev1", 00:13:12.578 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:12.578 "strip_size_kb": 0, 00:13:12.578 "state": "online", 00:13:12.578 "raid_level": "raid1", 
00:13:12.578 "superblock": true, 00:13:12.578 "num_base_bdevs": 4, 00:13:12.578 "num_base_bdevs_discovered": 3, 00:13:12.578 "num_base_bdevs_operational": 3, 00:13:12.578 "process": { 00:13:12.578 "type": "rebuild", 00:13:12.578 "target": "spare", 00:13:12.578 "progress": { 00:13:12.578 "blocks": 20480, 00:13:12.578 "percent": 32 00:13:12.578 } 00:13:12.578 }, 00:13:12.578 "base_bdevs_list": [ 00:13:12.578 { 00:13:12.578 "name": "spare", 00:13:12.578 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:12.578 "is_configured": true, 00:13:12.578 "data_offset": 2048, 00:13:12.578 "data_size": 63488 00:13:12.578 }, 00:13:12.578 { 00:13:12.578 "name": null, 00:13:12.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.578 "is_configured": false, 00:13:12.578 "data_offset": 2048, 00:13:12.578 "data_size": 63488 00:13:12.578 }, 00:13:12.578 { 00:13:12.578 "name": "BaseBdev3", 00:13:12.578 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:12.578 "is_configured": true, 00:13:12.578 "data_offset": 2048, 00:13:12.578 "data_size": 63488 00:13:12.578 }, 00:13:12.578 { 00:13:12.578 "name": "BaseBdev4", 00:13:12.578 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:12.578 "is_configured": true, 00:13:12.578 "data_offset": 2048, 00:13:12.578 "data_size": 63488 00:13:12.578 } 00:13:12.578 ] 00:13:12.578 }' 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.578 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 [2024-11-28 18:53:42.232028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.837 [2024-11-28 18:53:42.299029] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:12.837 [2024-11-28 18:53:42.299087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.837 [2024-11-28 18:53:42.299124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.837 [2024-11-28 18:53:42.299131] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.837 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.838 "name": "raid_bdev1", 00:13:12.838 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:12.838 "strip_size_kb": 0, 00:13:12.838 "state": "online", 00:13:12.838 "raid_level": "raid1", 00:13:12.838 "superblock": true, 00:13:12.838 "num_base_bdevs": 4, 00:13:12.838 "num_base_bdevs_discovered": 2, 00:13:12.838 "num_base_bdevs_operational": 2, 00:13:12.838 "base_bdevs_list": [ 00:13:12.838 { 00:13:12.838 "name": null, 00:13:12.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.838 "is_configured": false, 00:13:12.838 "data_offset": 0, 00:13:12.838 "data_size": 63488 00:13:12.838 }, 00:13:12.838 { 00:13:12.838 "name": null, 00:13:12.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.838 "is_configured": false, 00:13:12.838 "data_offset": 2048, 00:13:12.838 "data_size": 63488 00:13:12.838 }, 00:13:12.838 { 00:13:12.838 "name": "BaseBdev3", 00:13:12.838 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:12.838 "is_configured": true, 00:13:12.838 "data_offset": 2048, 00:13:12.838 "data_size": 63488 00:13:12.838 }, 00:13:12.838 { 00:13:12.838 "name": "BaseBdev4", 00:13:12.838 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:12.838 "is_configured": true, 00:13:12.838 "data_offset": 2048, 00:13:12.838 "data_size": 63488 00:13:12.838 } 00:13:12.838 ] 00:13:12.838 }' 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:12.838 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.408 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:13.408 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.408 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.408 [2024-11-28 18:53:42.783313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:13.408 [2024-11-28 18:53:42.783366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.408 [2024-11-28 18:53:42.783388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:13.408 [2024-11-28 18:53:42.783398] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.408 [2024-11-28 18:53:42.783907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.408 [2024-11-28 18:53:42.783934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:13.408 [2024-11-28 18:53:42.784039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:13.408 [2024-11-28 18:53:42.784053] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:13.408 [2024-11-28 18:53:42.784062] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:13.408 [2024-11-28 18:53:42.784105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.408 [2024-11-28 18:53:42.787682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2230 00:13:13.408 spare 00:13:13.408 18:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.408 18:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:13.408 [2024-11-28 18:53:42.789639] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.348 "name": "raid_bdev1", 00:13:14.348 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:14.348 "strip_size_kb": 0, 00:13:14.348 "state": "online", 00:13:14.348 
"raid_level": "raid1", 00:13:14.348 "superblock": true, 00:13:14.348 "num_base_bdevs": 4, 00:13:14.348 "num_base_bdevs_discovered": 3, 00:13:14.348 "num_base_bdevs_operational": 3, 00:13:14.348 "process": { 00:13:14.348 "type": "rebuild", 00:13:14.348 "target": "spare", 00:13:14.348 "progress": { 00:13:14.348 "blocks": 20480, 00:13:14.348 "percent": 32 00:13:14.348 } 00:13:14.348 }, 00:13:14.348 "base_bdevs_list": [ 00:13:14.348 { 00:13:14.348 "name": "spare", 00:13:14.348 "uuid": "f70d46a3-5967-567c-b6ab-fd895c16307f", 00:13:14.348 "is_configured": true, 00:13:14.348 "data_offset": 2048, 00:13:14.348 "data_size": 63488 00:13:14.348 }, 00:13:14.348 { 00:13:14.348 "name": null, 00:13:14.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.348 "is_configured": false, 00:13:14.348 "data_offset": 2048, 00:13:14.348 "data_size": 63488 00:13:14.348 }, 00:13:14.348 { 00:13:14.348 "name": "BaseBdev3", 00:13:14.348 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:14.348 "is_configured": true, 00:13:14.348 "data_offset": 2048, 00:13:14.348 "data_size": 63488 00:13:14.348 }, 00:13:14.348 { 00:13:14.348 "name": "BaseBdev4", 00:13:14.348 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:14.348 "is_configured": true, 00:13:14.348 "data_offset": 2048, 00:13:14.348 "data_size": 63488 00:13:14.348 } 00:13:14.348 ] 00:13:14.348 }' 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.348 18:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.348 [2024-11-28 18:53:43.949503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.608 [2024-11-28 18:53:43.995645] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.608 [2024-11-28 18:53:43.995702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.608 [2024-11-28 18:53:43.995733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.608 [2024-11-28 18:53:43.995742] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.608 
18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.608 "name": "raid_bdev1", 00:13:14.608 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:14.608 "strip_size_kb": 0, 00:13:14.608 "state": "online", 00:13:14.608 "raid_level": "raid1", 00:13:14.608 "superblock": true, 00:13:14.608 "num_base_bdevs": 4, 00:13:14.608 "num_base_bdevs_discovered": 2, 00:13:14.608 "num_base_bdevs_operational": 2, 00:13:14.608 "base_bdevs_list": [ 00:13:14.608 { 00:13:14.608 "name": null, 00:13:14.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.608 "is_configured": false, 00:13:14.608 "data_offset": 0, 00:13:14.608 "data_size": 63488 00:13:14.608 }, 00:13:14.608 { 00:13:14.608 "name": null, 00:13:14.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.608 "is_configured": false, 00:13:14.608 "data_offset": 2048, 00:13:14.608 "data_size": 63488 00:13:14.608 }, 00:13:14.608 { 00:13:14.608 "name": "BaseBdev3", 00:13:14.608 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:14.608 "is_configured": true, 00:13:14.608 "data_offset": 2048, 00:13:14.608 "data_size": 63488 00:13:14.608 }, 00:13:14.608 { 00:13:14.608 "name": "BaseBdev4", 00:13:14.608 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:14.608 "is_configured": true, 00:13:14.608 "data_offset": 2048, 00:13:14.608 "data_size": 63488 00:13:14.608 } 00:13:14.608 ] 00:13:14.608 }' 00:13:14.608 18:53:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.608 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.177 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.177 "name": "raid_bdev1", 00:13:15.177 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:15.177 "strip_size_kb": 0, 00:13:15.177 "state": "online", 00:13:15.178 "raid_level": "raid1", 00:13:15.178 "superblock": true, 00:13:15.178 "num_base_bdevs": 4, 00:13:15.178 "num_base_bdevs_discovered": 2, 00:13:15.178 "num_base_bdevs_operational": 2, 00:13:15.178 "base_bdevs_list": [ 00:13:15.178 { 00:13:15.178 "name": null, 00:13:15.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.178 "is_configured": false, 00:13:15.178 "data_offset": 0, 00:13:15.178 "data_size": 63488 00:13:15.178 }, 00:13:15.178 
{ 00:13:15.178 "name": null, 00:13:15.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.178 "is_configured": false, 00:13:15.178 "data_offset": 2048, 00:13:15.178 "data_size": 63488 00:13:15.178 }, 00:13:15.178 { 00:13:15.178 "name": "BaseBdev3", 00:13:15.178 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:15.178 "is_configured": true, 00:13:15.178 "data_offset": 2048, 00:13:15.178 "data_size": 63488 00:13:15.178 }, 00:13:15.178 { 00:13:15.178 "name": "BaseBdev4", 00:13:15.178 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:15.178 "is_configured": true, 00:13:15.178 "data_offset": 2048, 00:13:15.178 "data_size": 63488 00:13:15.178 } 00:13:15.178 ] 00:13:15.178 }' 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.178 [2024-11-28 18:53:44.619858] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.178 [2024-11-28 18:53:44.619909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.178 [2024-11-28 18:53:44.619927] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:15.178 [2024-11-28 18:53:44.619937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.178 [2024-11-28 18:53:44.620344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.178 [2024-11-28 18:53:44.620376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.178 [2024-11-28 18:53:44.620454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:15.178 [2024-11-28 18:53:44.620470] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:15.178 [2024-11-28 18:53:44.620491] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:15.178 [2024-11-28 18:53:44.620506] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:15.178 BaseBdev1 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.178 18:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.115 18:53:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.115 "name": "raid_bdev1", 00:13:16.115 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:16.115 "strip_size_kb": 0, 00:13:16.115 "state": "online", 00:13:16.115 "raid_level": "raid1", 00:13:16.115 "superblock": true, 00:13:16.115 "num_base_bdevs": 4, 00:13:16.115 "num_base_bdevs_discovered": 2, 00:13:16.115 "num_base_bdevs_operational": 2, 00:13:16.115 "base_bdevs_list": [ 00:13:16.115 { 00:13:16.115 "name": null, 00:13:16.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.115 "is_configured": false, 00:13:16.115 "data_offset": 0, 00:13:16.115 "data_size": 63488 00:13:16.115 }, 00:13:16.115 { 00:13:16.115 "name": null, 00:13:16.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.115 
"is_configured": false, 00:13:16.115 "data_offset": 2048, 00:13:16.115 "data_size": 63488 00:13:16.115 }, 00:13:16.115 { 00:13:16.115 "name": "BaseBdev3", 00:13:16.115 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:16.115 "is_configured": true, 00:13:16.115 "data_offset": 2048, 00:13:16.115 "data_size": 63488 00:13:16.115 }, 00:13:16.115 { 00:13:16.115 "name": "BaseBdev4", 00:13:16.115 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:16.115 "is_configured": true, 00:13:16.115 "data_offset": 2048, 00:13:16.115 "data_size": 63488 00:13:16.115 } 00:13:16.115 ] 00:13:16.115 }' 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.115 18:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:16.687 "name": "raid_bdev1", 00:13:16.687 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:16.687 "strip_size_kb": 0, 00:13:16.687 "state": "online", 00:13:16.687 "raid_level": "raid1", 00:13:16.687 "superblock": true, 00:13:16.687 "num_base_bdevs": 4, 00:13:16.687 "num_base_bdevs_discovered": 2, 00:13:16.687 "num_base_bdevs_operational": 2, 00:13:16.687 "base_bdevs_list": [ 00:13:16.687 { 00:13:16.687 "name": null, 00:13:16.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.687 "is_configured": false, 00:13:16.687 "data_offset": 0, 00:13:16.687 "data_size": 63488 00:13:16.687 }, 00:13:16.687 { 00:13:16.687 "name": null, 00:13:16.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.687 "is_configured": false, 00:13:16.687 "data_offset": 2048, 00:13:16.687 "data_size": 63488 00:13:16.687 }, 00:13:16.687 { 00:13:16.687 "name": "BaseBdev3", 00:13:16.687 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:16.687 "is_configured": true, 00:13:16.687 "data_offset": 2048, 00:13:16.687 "data_size": 63488 00:13:16.687 }, 00:13:16.687 { 00:13:16.687 "name": "BaseBdev4", 00:13:16.687 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:16.687 "is_configured": true, 00:13:16.687 "data_offset": 2048, 00:13:16.687 "data_size": 63488 00:13:16.687 } 00:13:16.687 ] 00:13:16.687 }' 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.687 [2024-11-28 18:53:46.176310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.687 [2024-11-28 18:53:46.176459] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:16.687 [2024-11-28 18:53:46.176473] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:16.687 request: 00:13:16.687 { 00:13:16.687 "base_bdev": "BaseBdev1", 00:13:16.687 "raid_bdev": "raid_bdev1", 00:13:16.687 "method": "bdev_raid_add_base_bdev", 00:13:16.687 "req_id": 1 00:13:16.687 } 00:13:16.687 Got JSON-RPC error response 00:13:16.687 response: 00:13:16.687 { 00:13:16.687 "code": -22, 00:13:16.687 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:16.687 } 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:16.687 18:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:17.626 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.886 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.886 "name": "raid_bdev1", 00:13:17.886 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:17.886 "strip_size_kb": 0, 00:13:17.886 "state": "online", 00:13:17.886 "raid_level": "raid1", 00:13:17.886 "superblock": true, 00:13:17.886 "num_base_bdevs": 4, 00:13:17.886 "num_base_bdevs_discovered": 2, 00:13:17.886 "num_base_bdevs_operational": 2, 00:13:17.886 "base_bdevs_list": [ 00:13:17.886 { 00:13:17.886 "name": null, 00:13:17.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.886 "is_configured": false, 00:13:17.886 "data_offset": 0, 00:13:17.886 "data_size": 63488 00:13:17.886 }, 00:13:17.886 { 00:13:17.886 "name": null, 00:13:17.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.886 "is_configured": false, 00:13:17.886 "data_offset": 2048, 00:13:17.886 "data_size": 63488 00:13:17.886 }, 00:13:17.886 { 00:13:17.886 "name": "BaseBdev3", 00:13:17.886 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:17.886 "is_configured": true, 00:13:17.886 "data_offset": 2048, 00:13:17.886 "data_size": 63488 00:13:17.886 }, 00:13:17.886 { 00:13:17.886 "name": "BaseBdev4", 00:13:17.886 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:17.886 "is_configured": true, 00:13:17.886 "data_offset": 2048, 00:13:17.886 "data_size": 63488 00:13:17.886 } 00:13:17.886 ] 00:13:17.886 }' 00:13:17.887 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.887 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.146 18:53:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.146 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.146 "name": "raid_bdev1", 00:13:18.146 "uuid": "ee55d3e8-0dec-4fd1-be60-58271c3ea988", 00:13:18.146 "strip_size_kb": 0, 00:13:18.146 "state": "online", 00:13:18.146 "raid_level": "raid1", 00:13:18.146 "superblock": true, 00:13:18.146 "num_base_bdevs": 4, 00:13:18.146 "num_base_bdevs_discovered": 2, 00:13:18.146 "num_base_bdevs_operational": 2, 00:13:18.147 "base_bdevs_list": [ 00:13:18.147 { 00:13:18.147 "name": null, 00:13:18.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.147 "is_configured": false, 00:13:18.147 "data_offset": 0, 00:13:18.147 "data_size": 63488 00:13:18.147 }, 00:13:18.147 { 00:13:18.147 "name": null, 00:13:18.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.147 "is_configured": false, 00:13:18.147 "data_offset": 2048, 00:13:18.147 "data_size": 63488 00:13:18.147 }, 00:13:18.147 { 00:13:18.147 "name": "BaseBdev3", 00:13:18.147 "uuid": "a33c34de-c9b2-505c-bcba-9c17d087ae50", 00:13:18.147 "is_configured": true, 00:13:18.147 "data_offset": 2048, 00:13:18.147 "data_size": 63488 00:13:18.147 }, 
00:13:18.147 { 00:13:18.147 "name": "BaseBdev4", 00:13:18.147 "uuid": "29c376f5-463f-5a3f-b2c4-39a45f76eca6", 00:13:18.147 "is_configured": true, 00:13:18.147 "data_offset": 2048, 00:13:18.147 "data_size": 63488 00:13:18.147 } 00:13:18.147 ] 00:13:18.147 }' 00:13:18.147 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 90089 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 90089 ']' 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 90089 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90089 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.407 killing process with pid 90089 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90089' 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 90089 00:13:18.407 Received shutdown signal, test time was about 60.000000 seconds 00:13:18.407 00:13:18.407 Latency(us) 00:13:18.407 
[2024-11-28T18:53:48.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.407 [2024-11-28T18:53:48.013Z] =================================================================================================================== 00:13:18.407 [2024-11-28T18:53:48.013Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:18.407 [2024-11-28 18:53:47.853569] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.407 [2024-11-28 18:53:47.853678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.407 [2024-11-28 18:53:47.853742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.407 18:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 90089 00:13:18.407 [2024-11-28 18:53:47.853752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:18.407 [2024-11-28 18:53:47.904668] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:18.668 00:13:18.668 real 0m23.862s 00:13:18.668 user 0m28.814s 00:13:18.668 sys 0m4.031s 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.668 ************************************ 00:13:18.668 END TEST raid_rebuild_test_sb 00:13:18.668 ************************************ 00:13:18.668 18:53:48 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:18.668 18:53:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:18.668 18:53:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.668 18:53:48 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:13:18.668 ************************************ 00:13:18.668 START TEST raid_rebuild_test_io 00:13:18.668 ************************************ 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90836 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90836 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 90836 ']' 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.668 18:53:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.928 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:18.929 Zero copy mechanism will not be used. 00:13:18.929 [2024-11-28 18:53:48.311973] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:18.929 [2024-11-28 18:53:48.312110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90836 ] 00:13:18.929 [2024-11-28 18:53:48.452969] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:18.929 [2024-11-28 18:53:48.488897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.929 [2024-11-28 18:53:48.514870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.188 [2024-11-28 18:53:48.559006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.188 [2024-11-28 18:53:48.559046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 BaseBdev1_malloc 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 [2024-11-28 18:53:49.152513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:19.759 [2024-11-28 18:53:49.152599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.759 [2024-11-28 18:53:49.152625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:19.759 [2024-11-28 
18:53:49.152639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.759 [2024-11-28 18:53:49.154747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.759 [2024-11-28 18:53:49.154782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:19.759 BaseBdev1 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 BaseBdev2_malloc 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 [2024-11-28 18:53:49.181403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:19.759 [2024-11-28 18:53:49.181467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.759 [2024-11-28 18:53:49.181501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:19.759 [2024-11-28 18:53:49.181511] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.759 [2024-11-28 18:53:49.183555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:19.759 [2024-11-28 18:53:49.183591] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:19.759 BaseBdev2 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 BaseBdev3_malloc 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 [2024-11-28 18:53:49.210321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:19.759 [2024-11-28 18:53:49.210371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.759 [2024-11-28 18:53:49.210405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:19.759 [2024-11-28 18:53:49.210416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.759 [2024-11-28 18:53:49.212459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.759 [2024-11-28 18:53:49.212496] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:19.759 BaseBdev3 00:13:19.759 18:53:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 BaseBdev4_malloc 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 [2024-11-28 18:53:49.259284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:19.759 [2024-11-28 18:53:49.259403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.759 [2024-11-28 18:53:49.259477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:19.759 [2024-11-28 18:53:49.259504] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.759 [2024-11-28 18:53:49.263149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.759 [2024-11-28 18:53:49.263204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:19.759 BaseBdev4 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc
00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:19.759 spare_malloc
00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:19.759 spare_delay
00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.759 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:19.760 [2024-11-28 18:53:49.301344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:19.760 [2024-11-28 18:53:49.301391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:19.760 [2024-11-28 18:53:49.301423] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:13:19.760 [2024-11-28 18:53:49.301434] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:19.760 [2024-11-28 18:53:49.303470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:19.760 [2024-11-28 18:53:49.303507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:19.760 spare
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:19.760 [2024-11-28 18:53:49.313407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:19.760 [2024-11-28 18:53:49.315232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:19.760 [2024-11-28 18:53:49.315296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:19.760 [2024-11-28 18:53:49.315337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:19.760 [2024-11-28 18:53:49.315405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:13:19.760 [2024-11-28 18:53:49.315423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:13:19.760 [2024-11-28 18:53:49.315684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:13:19.760 [2024-11-28 18:53:49.315844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:13:19.760 [2024-11-28 18:53:49.315862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:13:19.760 [2024-11-28 18:53:49.315986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:19.760 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.020 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:20.020 "name": "raid_bdev1",
00:13:20.020 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915",
00:13:20.020 "strip_size_kb": 0,
00:13:20.020 "state": "online",
00:13:20.020 "raid_level": "raid1",
00:13:20.020 "superblock": false,
00:13:20.020 "num_base_bdevs": 4,
00:13:20.020 "num_base_bdevs_discovered": 4,
00:13:20.020 "num_base_bdevs_operational": 4,
00:13:20.020 "base_bdevs_list": [
00:13:20.020 {
00:13:20.020 "name": "BaseBdev1",
00:13:20.020 "uuid": "19d65637-5b5d-5397-b2ed-25879a069fb6",
00:13:20.020 "is_configured": true,
00:13:20.020 "data_offset": 0,
00:13:20.020 "data_size": 65536
00:13:20.020 },
00:13:20.020 {
00:13:20.020 "name": "BaseBdev2",
00:13:20.020 "uuid": "9bbde104-1d02-585a-a357-43dddafff8e5",
00:13:20.020 "is_configured": true,
00:13:20.020 "data_offset": 0,
00:13:20.020 "data_size": 65536
00:13:20.020 },
00:13:20.020 {
00:13:20.020 "name": "BaseBdev3",
00:13:20.020 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc",
00:13:20.020 "is_configured": true,
00:13:20.020 "data_offset": 0,
00:13:20.020 "data_size": 65536
00:13:20.020 },
00:13:20.020 {
00:13:20.020 "name": "BaseBdev4",
00:13:20.020 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428",
00:13:20.020 "is_configured": true,
00:13:20.020 "data_offset": 0,
00:13:20.020 "data_size": 65536
00:13:20.020 }
00:13:20.020 ]
00:13:20.020 }'
00:13:20.020 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:20.020 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.280 [2024-11-28 18:53:49.773781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.280 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.280 [2024-11-28 18:53:49.857532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.281 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.540 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.540 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:20.540 "name": "raid_bdev1",
00:13:20.540 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915",
00:13:20.540 "strip_size_kb": 0,
00:13:20.540 "state": "online",
00:13:20.540 "raid_level": "raid1",
00:13:20.540 "superblock": false,
00:13:20.540 "num_base_bdevs": 4,
00:13:20.540 "num_base_bdevs_discovered": 3,
00:13:20.540 "num_base_bdevs_operational": 3,
00:13:20.540 "base_bdevs_list": [
00:13:20.540 {
00:13:20.540 "name": null,
00:13:20.540 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:20.540 "is_configured": false,
00:13:20.540 "data_offset": 0,
00:13:20.540 "data_size": 65536
00:13:20.540 },
00:13:20.540 {
00:13:20.540 "name": "BaseBdev2",
00:13:20.540 "uuid": "9bbde104-1d02-585a-a357-43dddafff8e5",
00:13:20.540 "is_configured": true,
00:13:20.540 "data_offset": 0,
00:13:20.540 "data_size": 65536
00:13:20.540 },
00:13:20.540 {
00:13:20.540 "name": "BaseBdev3",
00:13:20.540 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc",
00:13:20.540 "is_configured": true,
00:13:20.540 "data_offset": 0,
00:13:20.540 "data_size": 65536
00:13:20.540 },
00:13:20.540 {
00:13:20.540 "name": "BaseBdev4",
00:13:20.540 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428",
00:13:20.540 "is_configured": true,
00:13:20.540 "data_offset": 0,
00:13:20.540 "data_size": 65536
00:13:20.540 }
00:13:20.540 ]
00:13:20.540 }'
00:13:20.540 18:53:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:20.540 18:53:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.540 [2024-11-28 18:53:49.947484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630
00:13:20.540 I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:20.540 Zero copy mechanism will not be used.
00:13:20.540 Running I/O for 60 seconds...
00:13:20.800 18:53:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:20.800 18:53:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.800 18:53:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.800 [2024-11-28 18:53:50.335237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:20.800 18:53:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.800 18:53:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:13:20.800 [2024-11-28 18:53:50.389102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:13:20.800 [2024-11-28 18:53:50.391132] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:21.060 [2024-11-28 18:53:50.504792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:21.060 [2024-11-28 18:53:50.505956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:21.320 [2024-11-28 18:53:50.719663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:21.320 [2024-11-28 18:53:50.719897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:21.579 154.00 IOPS, 462.00 MiB/s [2024-11-28T18:53:51.185Z] [2024-11-28 18:53:51.047061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:21.839 [2024-11-28 18:53:51.394507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.839 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:21.839 "name": "raid_bdev1",
00:13:21.839 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915",
00:13:21.839 "strip_size_kb": 0,
00:13:21.839 "state": "online",
00:13:21.839 "raid_level": "raid1",
00:13:21.839 "superblock": false,
00:13:21.839 "num_base_bdevs": 4,
00:13:21.839 "num_base_bdevs_discovered": 4,
00:13:21.839 "num_base_bdevs_operational": 4,
00:13:21.839 "process": {
00:13:21.839 "type": "rebuild",
00:13:21.839 "target": "spare",
00:13:21.839 "progress": {
00:13:21.839 "blocks": 12288,
00:13:21.839 "percent": 18
00:13:21.839 }
00:13:21.839 },
00:13:21.839 "base_bdevs_list": [
00:13:21.839 {
00:13:21.839 "name": "spare",
00:13:21.839 "uuid": "fb041668-0dae-5e48-a1e0-958012ed0f44",
00:13:21.839 "is_configured": true,
00:13:21.839 "data_offset": 0,
00:13:21.839 "data_size": 65536
00:13:21.839 },
00:13:21.839 {
00:13:21.839 "name": "BaseBdev2",
00:13:21.839 "uuid": "9bbde104-1d02-585a-a357-43dddafff8e5",
00:13:21.839 "is_configured": true,
00:13:21.839 "data_offset": 0,
00:13:21.839 "data_size": 65536
00:13:21.839 },
00:13:21.839 {
00:13:21.839 "name": "BaseBdev3",
00:13:21.839 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc",
00:13:21.839 "is_configured": true,
00:13:21.839 "data_offset": 0,
00:13:21.839 "data_size": 65536
00:13:21.839 },
00:13:21.839 {
00:13:21.839 "name": "BaseBdev4",
00:13:21.839 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428",
00:13:21.839 "is_configured": true,
00:13:21.839 "data_offset": 0,
00:13:21.839 "data_size": 65536
00:13:21.839 }
00:13:21.839 ]
00:13:21.839 }'
00:13:22.098 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:22.098 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:22.098 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:22.098 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:22.098 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:22.098 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.098 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:22.098 [2024-11-28 18:53:51.533965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare [2024-11-28 18:53:51.602379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 [2024-11-28 18:53:51.602578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:22.359 [2024-11-28 18:53:51.710129] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:22.359 [2024-11-28 18:53:51.714321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:22.359 [2024-11-28 18:53:51.714368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:22.359 [2024-11-28 18:53:51.714399] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:22.359 [2024-11-28 18:53:51.737633] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:22.359 "name": "raid_bdev1",
00:13:22.359 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915",
00:13:22.359 "strip_size_kb": 0,
00:13:22.359 "state": "online",
00:13:22.359 "raid_level": "raid1",
00:13:22.359 "superblock": false,
00:13:22.359 "num_base_bdevs": 4,
00:13:22.359 "num_base_bdevs_discovered": 3,
00:13:22.359 "num_base_bdevs_operational": 3,
00:13:22.359 "base_bdevs_list": [
00:13:22.359 {
00:13:22.359 "name": null,
00:13:22.359 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:22.359 "is_configured": false,
00:13:22.359 "data_offset": 0,
00:13:22.359 "data_size": 65536
00:13:22.359 },
00:13:22.359 {
00:13:22.359 "name": "BaseBdev2",
00:13:22.359 "uuid": "9bbde104-1d02-585a-a357-43dddafff8e5",
00:13:22.359 "is_configured": true,
00:13:22.359 "data_offset": 0,
00:13:22.359 "data_size": 65536
00:13:22.359 },
00:13:22.359 {
00:13:22.359 "name": "BaseBdev3",
00:13:22.359 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc",
00:13:22.359 "is_configured": true,
00:13:22.359 "data_offset": 0,
00:13:22.359 "data_size": 65536
00:13:22.359 },
00:13:22.359 {
00:13:22.359 "name": "BaseBdev4",
00:13:22.359 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428",
00:13:22.359 "is_configured": true,
00:13:22.359 "data_offset": 0,
00:13:22.359 "data_size": 65536
00:13:22.359 }
00:13:22.359 ]
00:13:22.359 }'
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:22.359 18:53:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:22.879 131.50 IOPS, 394.50 MiB/s [2024-11-28T18:53:52.485Z] 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:22.879 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:22.880 "name": "raid_bdev1",
00:13:22.880 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915",
00:13:22.880 "strip_size_kb": 0,
00:13:22.880 "state": "online",
00:13:22.880 "raid_level": "raid1",
00:13:22.880 "superblock": false,
00:13:22.880 "num_base_bdevs": 4,
00:13:22.880 "num_base_bdevs_discovered": 3,
00:13:22.880 "num_base_bdevs_operational": 3,
00:13:22.880 "base_bdevs_list": [
00:13:22.880 {
00:13:22.880 "name": null,
00:13:22.880 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:22.880 "is_configured": false,
00:13:22.880 "data_offset": 0,
00:13:22.880 "data_size": 65536
00:13:22.880 },
00:13:22.880 {
00:13:22.880 "name": "BaseBdev2",
00:13:22.880 "uuid": "9bbde104-1d02-585a-a357-43dddafff8e5",
00:13:22.880 "is_configured": true,
00:13:22.880 "data_offset": 0,
00:13:22.880 "data_size": 65536
00:13:22.880 },
00:13:22.880 {
00:13:22.880 "name": "BaseBdev3",
00:13:22.880 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc",
00:13:22.880 "is_configured": true,
00:13:22.880 "data_offset": 0,
00:13:22.880 "data_size": 65536
00:13:22.880 },
00:13:22.880 {
00:13:22.880 "name": "BaseBdev4",
00:13:22.880 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428",
00:13:22.880 "is_configured": true,
00:13:22.880 "data_offset": 0,
00:13:22.880 "data_size": 65536
00:13:22.880 }
00:13:22.880 ]
00:13:22.880 }'
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:22.880 [2024-11-28 18:53:52.423721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.880 18:53:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:22.880 [2024-11-28 18:53:52.459190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0
00:13:22.880 [2024-11-28 18:53:52.461203] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:23.140 [2024-11-28 18:53:52.574460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:23.140 [2024-11-28 18:53:52.575775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:23.400 [2024-11-28 18:53:52.797292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:23.400 [2024-11-28 18:53:52.797945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:23.970 148.00 IOPS, 444.00 MiB/s [2024-11-28T18:53:53.576Z] [2024-11-28 18:53:53.275693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:23.970 "name": "raid_bdev1",
00:13:23.970 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915",
00:13:23.970 "strip_size_kb": 0,
00:13:23.970 "state": "online",
00:13:23.970 "raid_level": "raid1",
00:13:23.970 "superblock": false,
00:13:23.970 "num_base_bdevs": 4,
00:13:23.970 "num_base_bdevs_discovered": 4,
00:13:23.970 "num_base_bdevs_operational": 4,
00:13:23.970 "process": {
00:13:23.970 "type": "rebuild",
00:13:23.970 "target": "spare",
00:13:23.970 "progress": {
00:13:23.970 "blocks": 12288,
00:13:23.970 "percent": 18
00:13:23.970 }
00:13:23.970 },
00:13:23.970 "base_bdevs_list": [
00:13:23.970 {
00:13:23.970 "name": "spare",
00:13:23.970 "uuid": "fb041668-0dae-5e48-a1e0-958012ed0f44",
00:13:23.970 "is_configured": true,
00:13:23.970 "data_offset": 0,
00:13:23.970 "data_size": 65536
00:13:23.970 },
00:13:23.970 {
00:13:23.970 "name": "BaseBdev2",
00:13:23.970 "uuid": "9bbde104-1d02-585a-a357-43dddafff8e5",
00:13:23.970 "is_configured": true,
00:13:23.970 "data_offset": 0,
00:13:23.970 "data_size": 65536
00:13:23.970 },
00:13:23.970 {
00:13:23.970 "name": "BaseBdev3",
00:13:23.970 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc",
00:13:23.970 "is_configured": true,
00:13:23.970 "data_offset": 0,
00:13:23.970 "data_size": 65536
00:13:23.970 },
00:13:23.970 {
00:13:23.970 "name": "BaseBdev4",
00:13:23.970 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428",
00:13:23.970 "is_configured": true,
00:13:23.970 "data_offset": 0,
00:13:23.970 "data_size": 65536
00:13:23.970 }
00:13:23.970 ]
00:13:23.970 }'
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:23.970 [2024-11-28 18:53:53.520866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:23.970 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:24.231 [2024-11-28 18:53:53.616057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:24.231 [2024-11-28 18:53:53.630042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:24.231 [2024-11-28 18:53:53.651943] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630
00:13:24.231 [2024-11-28 18:53:53.651978] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0
00:13:24.231 [2024-11-28 18:53:53.653571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:24.231 "name": "raid_bdev1",
00:13:24.231 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915",
00:13:24.231 "strip_size_kb": 0,
00:13:24.231 "state": "online",
00:13:24.231 "raid_level": "raid1",
00:13:24.231 "superblock": false,
00:13:24.231 "num_base_bdevs": 4,
00:13:24.231 "num_base_bdevs_discovered": 3,
00:13:24.231 "num_base_bdevs_operational": 3,
00:13:24.231 "process": {
00:13:24.231 "type": "rebuild",
00:13:24.231 "target": "spare",
00:13:24.231 "progress": {
00:13:24.231 "blocks": 16384,
00:13:24.231 "percent": 25
00:13:24.231 }
00:13:24.231 },
00:13:24.231 "base_bdevs_list": [
00:13:24.231 {
00:13:24.231 "name": "spare",
00:13:24.231 "uuid": "fb041668-0dae-5e48-a1e0-958012ed0f44",
00:13:24.231 "is_configured": true,
00:13:24.231 "data_offset": 0,
00:13:24.231 "data_size": 65536
00:13:24.231 },
00:13:24.231 {
00:13:24.231 "name": null,
00:13:24.231 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:24.231 "is_configured": false,
00:13:24.231 "data_offset": 0,
00:13:24.231 "data_size": 65536
00:13:24.231 },
00:13:24.231 {
00:13:24.231 "name": "BaseBdev3",
00:13:24.231 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc",
00:13:24.231 "is_configured": true,
00:13:24.231 "data_offset": 0,
00:13:24.231 "data_size": 65536
00:13:24.231 },
00:13:24.231 {
00:13:24.231 "name": "BaseBdev4",
00:13:24.231 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428",
00:13:24.231 "is_configured": true,
00:13:24.231 "data_offset": 0,
00:13:24.231 "data_size": 65536
00:13:24.231 }
00:13:24.231 ]
00:13:24.231 }'
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=386
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:24.231 18:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.492 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:24.492 "name": "raid_bdev1",
00:13:24.492 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915",
00:13:24.492 "strip_size_kb": 0,
00:13:24.492 "state": "online",
00:13:24.492 "raid_level": "raid1",
00:13:24.492 "superblock": false,
00:13:24.492 "num_base_bdevs": 4,
00:13:24.492 "num_base_bdevs_discovered": 3,
00:13:24.492 "num_base_bdevs_operational": 3,
00:13:24.492 "process": {
00:13:24.492 "type": "rebuild",
00:13:24.492 "target": "spare",
00:13:24.492 "progress": {
00:13:24.492 "blocks": 18432,
00:13:24.492 "percent": 28
00:13:24.492 }
00:13:24.492 },
00:13:24.492 "base_bdevs_list": [ 00:13:24.492 { 00:13:24.492 "name": "spare", 00:13:24.492 "uuid": "fb041668-0dae-5e48-a1e0-958012ed0f44", 00:13:24.492 "is_configured": true, 00:13:24.492 "data_offset": 0, 00:13:24.492 "data_size": 65536 00:13:24.492 }, 00:13:24.492 { 00:13:24.492 "name": null, 00:13:24.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.492 "is_configured": false, 00:13:24.492 "data_offset": 0, 00:13:24.492 "data_size": 65536 00:13:24.492 }, 00:13:24.492 { 00:13:24.492 "name": "BaseBdev3", 00:13:24.492 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc", 00:13:24.492 "is_configured": true, 00:13:24.492 "data_offset": 0, 00:13:24.492 "data_size": 65536 00:13:24.492 }, 00:13:24.492 { 00:13:24.493 "name": "BaseBdev4", 00:13:24.493 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428", 00:13:24.493 "is_configured": true, 00:13:24.493 "data_offset": 0, 00:13:24.493 "data_size": 65536 00:13:24.493 } 00:13:24.493 ] 00:13:24.493 }' 00:13:24.493 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.493 [2024-11-28 18:53:53.881769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:24.493 [2024-11-28 18:53:53.882565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:24.493 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.493 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.493 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.493 18:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:24.493 131.25 IOPS, 393.75 MiB/s [2024-11-28T18:53:54.099Z] [2024-11-28 18:53:54.089466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:24.493 [2024-11-28 18:53:54.089811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:25.063 [2024-11-28 18:53:54.427176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:25.063 [2024-11-28 18:53:54.529503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:25.323 [2024-11-28 18:53:54.874255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.592 18:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.592 116.40 IOPS, 349.20 MiB/s 
[2024-11-28T18:53:55.198Z] 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.592 "name": "raid_bdev1", 00:13:25.592 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915", 00:13:25.592 "strip_size_kb": 0, 00:13:25.593 "state": "online", 00:13:25.593 "raid_level": "raid1", 00:13:25.593 "superblock": false, 00:13:25.593 "num_base_bdevs": 4, 00:13:25.593 "num_base_bdevs_discovered": 3, 00:13:25.593 "num_base_bdevs_operational": 3, 00:13:25.593 "process": { 00:13:25.593 "type": "rebuild", 00:13:25.593 "target": "spare", 00:13:25.593 "progress": { 00:13:25.593 "blocks": 34816, 00:13:25.593 "percent": 53 00:13:25.593 } 00:13:25.593 }, 00:13:25.593 "base_bdevs_list": [ 00:13:25.593 { 00:13:25.593 "name": "spare", 00:13:25.593 "uuid": "fb041668-0dae-5e48-a1e0-958012ed0f44", 00:13:25.593 "is_configured": true, 00:13:25.593 "data_offset": 0, 00:13:25.593 "data_size": 65536 00:13:25.593 }, 00:13:25.593 { 00:13:25.593 "name": null, 00:13:25.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.593 "is_configured": false, 00:13:25.593 "data_offset": 0, 00:13:25.593 "data_size": 65536 00:13:25.593 }, 00:13:25.593 { 00:13:25.593 "name": "BaseBdev3", 00:13:25.593 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc", 00:13:25.593 "is_configured": true, 00:13:25.593 "data_offset": 0, 00:13:25.593 "data_size": 65536 00:13:25.593 }, 00:13:25.593 { 00:13:25.593 "name": "BaseBdev4", 00:13:25.593 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428", 00:13:25.593 "is_configured": true, 00:13:25.593 "data_offset": 0, 00:13:25.593 "data_size": 65536 00:13:25.593 } 00:13:25.593 ] 00:13:25.593 }' 00:13:25.593 18:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.593 18:53:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.593 18:53:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.593 18:53:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.593 18:53:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.166 [2024-11-28 18:53:55.556083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:26.166 [2024-11-28 18:53:55.770578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:26.686 102.17 IOPS, 306.50 MiB/s [2024-11-28T18:53:56.292Z] 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.686 "name": "raid_bdev1", 00:13:26.686 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915", 00:13:26.686 
"strip_size_kb": 0, 00:13:26.686 "state": "online", 00:13:26.686 "raid_level": "raid1", 00:13:26.686 "superblock": false, 00:13:26.686 "num_base_bdevs": 4, 00:13:26.686 "num_base_bdevs_discovered": 3, 00:13:26.686 "num_base_bdevs_operational": 3, 00:13:26.686 "process": { 00:13:26.686 "type": "rebuild", 00:13:26.686 "target": "spare", 00:13:26.686 "progress": { 00:13:26.686 "blocks": 51200, 00:13:26.686 "percent": 78 00:13:26.686 } 00:13:26.686 }, 00:13:26.686 "base_bdevs_list": [ 00:13:26.686 { 00:13:26.686 "name": "spare", 00:13:26.686 "uuid": "fb041668-0dae-5e48-a1e0-958012ed0f44", 00:13:26.686 "is_configured": true, 00:13:26.686 "data_offset": 0, 00:13:26.686 "data_size": 65536 00:13:26.686 }, 00:13:26.686 { 00:13:26.686 "name": null, 00:13:26.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.686 "is_configured": false, 00:13:26.686 "data_offset": 0, 00:13:26.686 "data_size": 65536 00:13:26.686 }, 00:13:26.686 { 00:13:26.686 "name": "BaseBdev3", 00:13:26.686 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc", 00:13:26.686 "is_configured": true, 00:13:26.686 "data_offset": 0, 00:13:26.686 "data_size": 65536 00:13:26.686 }, 00:13:26.686 { 00:13:26.686 "name": "BaseBdev4", 00:13:26.686 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428", 00:13:26.686 "is_configured": true, 00:13:26.686 "data_offset": 0, 00:13:26.686 "data_size": 65536 00:13:26.686 } 00:13:26.686 ] 00:13:26.686 }' 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.686 [2024-11-28 18:53:56.203555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # 
[[ spare == \s\p\a\r\e ]] 00:13:26.686 18:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.949 [2024-11-28 18:53:56.524177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:27.596 93.29 IOPS, 279.86 MiB/s [2024-11-28T18:53:57.202Z] [2024-11-28 18:53:57.059792] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:27.596 [2024-11-28 18:53:57.165164] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:27.596 [2024-11-28 18:53:57.167343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.873 "name": "raid_bdev1", 00:13:27.873 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915", 00:13:27.873 "strip_size_kb": 0, 00:13:27.873 "state": "online", 00:13:27.873 "raid_level": "raid1", 00:13:27.873 "superblock": false, 00:13:27.873 "num_base_bdevs": 4, 00:13:27.873 "num_base_bdevs_discovered": 3, 00:13:27.873 "num_base_bdevs_operational": 3, 00:13:27.873 "base_bdevs_list": [ 00:13:27.873 { 00:13:27.873 "name": "spare", 00:13:27.873 "uuid": "fb041668-0dae-5e48-a1e0-958012ed0f44", 00:13:27.873 "is_configured": true, 00:13:27.873 "data_offset": 0, 00:13:27.873 "data_size": 65536 00:13:27.873 }, 00:13:27.873 { 00:13:27.873 "name": null, 00:13:27.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.873 "is_configured": false, 00:13:27.873 "data_offset": 0, 00:13:27.873 "data_size": 65536 00:13:27.873 }, 00:13:27.873 { 00:13:27.873 "name": "BaseBdev3", 00:13:27.873 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc", 00:13:27.873 "is_configured": true, 00:13:27.873 "data_offset": 0, 00:13:27.873 "data_size": 65536 00:13:27.873 }, 00:13:27.873 { 00:13:27.873 "name": "BaseBdev4", 00:13:27.873 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428", 00:13:27.873 "is_configured": true, 00:13:27.873 "data_offset": 0, 00:13:27.873 "data_size": 65536 00:13:27.873 } 00:13:27.873 ] 00:13:27.873 }' 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.873 "name": "raid_bdev1", 00:13:27.873 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915", 00:13:27.873 "strip_size_kb": 0, 00:13:27.873 "state": "online", 00:13:27.873 "raid_level": "raid1", 00:13:27.873 "superblock": false, 00:13:27.873 "num_base_bdevs": 4, 00:13:27.873 "num_base_bdevs_discovered": 3, 00:13:27.873 "num_base_bdevs_operational": 3, 00:13:27.873 "base_bdevs_list": [ 00:13:27.873 { 00:13:27.873 "name": "spare", 00:13:27.873 "uuid": "fb041668-0dae-5e48-a1e0-958012ed0f44", 00:13:27.873 "is_configured": true, 00:13:27.873 "data_offset": 0, 00:13:27.873 "data_size": 65536 00:13:27.873 }, 00:13:27.873 { 00:13:27.873 "name": null, 00:13:27.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.873 "is_configured": false, 00:13:27.873 "data_offset": 0, 00:13:27.873 "data_size": 65536 00:13:27.873 }, 00:13:27.873 { 00:13:27.873 "name": 
"BaseBdev3", 00:13:27.873 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc", 00:13:27.873 "is_configured": true, 00:13:27.873 "data_offset": 0, 00:13:27.873 "data_size": 65536 00:13:27.873 }, 00:13:27.873 { 00:13:27.873 "name": "BaseBdev4", 00:13:27.873 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428", 00:13:27.873 "is_configured": true, 00:13:27.873 "data_offset": 0, 00:13:27.873 "data_size": 65536 00:13:27.873 } 00:13:27.873 ] 00:13:27.873 }' 00:13:27.873 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.133 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.134 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.134 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.134 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.134 "name": "raid_bdev1", 00:13:28.134 "uuid": "87dc14f0-6ad1-4332-a9f0-23569d1fb915", 00:13:28.134 "strip_size_kb": 0, 00:13:28.134 "state": "online", 00:13:28.134 "raid_level": "raid1", 00:13:28.134 "superblock": false, 00:13:28.134 "num_base_bdevs": 4, 00:13:28.134 "num_base_bdevs_discovered": 3, 00:13:28.134 "num_base_bdevs_operational": 3, 00:13:28.134 "base_bdevs_list": [ 00:13:28.134 { 00:13:28.134 "name": "spare", 00:13:28.134 "uuid": "fb041668-0dae-5e48-a1e0-958012ed0f44", 00:13:28.134 "is_configured": true, 00:13:28.134 "data_offset": 0, 00:13:28.134 "data_size": 65536 00:13:28.134 }, 00:13:28.134 { 00:13:28.134 "name": null, 00:13:28.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.134 "is_configured": false, 00:13:28.134 "data_offset": 0, 00:13:28.134 "data_size": 65536 00:13:28.134 }, 00:13:28.134 { 00:13:28.134 "name": "BaseBdev3", 00:13:28.134 "uuid": "fe0f61c0-aed7-559e-96c8-b3b4187bf7cc", 00:13:28.134 "is_configured": true, 00:13:28.134 "data_offset": 0, 00:13:28.134 "data_size": 65536 00:13:28.134 }, 00:13:28.134 { 00:13:28.134 "name": "BaseBdev4", 00:13:28.134 "uuid": "e6400102-4067-5fa5-a1ed-bb3f83eab428", 00:13:28.134 "is_configured": true, 00:13:28.134 "data_offset": 0, 00:13:28.134 "data_size": 65536 00:13:28.134 } 00:13:28.134 ] 00:13:28.134 }' 00:13:28.134 18:53:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.134 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.393 85.50 IOPS, 256.50 MiB/s [2024-11-28T18:53:57.999Z] 18:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.393 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.393 18:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.393 [2024-11-28 18:53:57.975185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.393 [2024-11-28 18:53:57.975224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.653 00:13:28.653 Latency(us) 00:13:28.653 [2024-11-28T18:53:58.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.653 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:28.653 raid_bdev1 : 8.08 84.94 254.82 0.00 0.00 16561.89 289.18 116071.78 00:13:28.653 [2024-11-28T18:53:58.259Z] =================================================================================================================== 00:13:28.653 [2024-11-28T18:53:58.259Z] Total : 84.94 254.82 0.00 0.00 16561.89 289.18 116071.78 00:13:28.653 [2024-11-28 18:53:58.029380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.653 [2024-11-28 18:53:58.029467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.653 { 00:13:28.653 "results": [ 00:13:28.653 { 00:13:28.653 "job": "raid_bdev1", 00:13:28.653 "core_mask": "0x1", 00:13:28.653 "workload": "randrw", 00:13:28.653 "percentage": 50, 00:13:28.653 "status": "finished", 00:13:28.653 "queue_depth": 2, 00:13:28.653 "io_size": 3145728, 00:13:28.653 "runtime": 8.07628, 00:13:28.653 "iops": 84.94009618289608, 00:13:28.653 "mibps": 
254.82028854868827, 00:13:28.653 "io_failed": 0, 00:13:28.653 "io_timeout": 0, 00:13:28.653 "avg_latency_us": 16561.894817570166, 00:13:28.653 "min_latency_us": 289.1798134751155, 00:13:28.653 "max_latency_us": 116071.77895929574 00:13:28.653 } 00:13:28.653 ], 00:13:28.653 "core_count": 1 00:13:28.653 } 00:13:28.653 [2024-11-28 18:53:58.029565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.653 [2024-11-28 18:53:58.029582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.653 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:28.913 /dev/nbd0 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.913 1+0 records in 00:13:28.913 1+0 records out 00:13:28.913 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000280645 s, 14.6 MB/s 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:28.913 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.914 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:28.914 18:53:58 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.914 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:28.914 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.914 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.914 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:29.173 /dev/nbd1 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.173 1+0 records in 00:13:29.173 1+0 records out 00:13:29.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435273 s, 9.4 MB/s 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.173 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.433 18:53:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:29.692 /dev/nbd1 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.693 1+0 records in 00:13:29.693 1+0 records out 00:13:29.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371001 s, 11.0 MB/s 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.693 18:53:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.693 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:29.952 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.953 18:53:59 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:29.953 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.953 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:29.953 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.953 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 90836 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 90836 ']' 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 90836 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.212 
18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90836 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.212 killing process with pid 90836 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90836' 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 90836 00:13:30.212 Received shutdown signal, test time was about 9.668393 seconds 00:13:30.212 00:13:30.212 Latency(us) 00:13:30.212 [2024-11-28T18:53:59.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.212 [2024-11-28T18:53:59.818Z] =================================================================================================================== 00:13:30.212 [2024-11-28T18:53:59.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:30.212 [2024-11-28 18:53:59.618568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.212 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 90836 00:13:30.212 [2024-11-28 18:53:59.663852] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:30.473 00:13:30.473 real 0m11.677s 00:13:30.473 user 0m15.158s 00:13:30.473 sys 0m1.912s 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.473 ************************************ 00:13:30.473 END TEST raid_rebuild_test_io 00:13:30.473 ************************************ 00:13:30.473 18:53:59 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test 
raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:30.473 18:53:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:30.473 18:53:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.473 18:53:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:30.473 ************************************ 00:13:30.473 START TEST raid_rebuild_test_sb_io 00:13:30.473 ************************************ 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.473 18:53:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91228 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 91228 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 91228 ']' 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.473 18:53:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.473 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:30.473 Zero copy mechanism will not be used. 00:13:30.473 [2024-11-28 18:54:00.071380] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:30.473 [2024-11-28 18:54:00.071522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91228 ] 00:13:30.733 [2024-11-28 18:54:00.210795] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:30.733 [2024-11-28 18:54:00.247504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.733 [2024-11-28 18:54:00.273332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.733 [2024-11-28 18:54:00.316772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.733 [2024-11-28 18:54:00.316811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.304 BaseBdev1_malloc 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.304 [2024-11-28 18:54:00.886365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:31.304 [2024-11-28 18:54:00.886422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.304 [2024-11-28 18:54:00.886456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:13:31.304 [2024-11-28 18:54:00.886469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.304 [2024-11-28 18:54:00.888529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.304 [2024-11-28 18:54:00.888566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:31.304 BaseBdev1 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.304 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.304 BaseBdev2_malloc 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.565 [2024-11-28 18:54:00.915229] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:31.565 [2024-11-28 18:54:00.915279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.565 [2024-11-28 18:54:00.915297] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:31.565 [2024-11-28 18:54:00.915306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.565 [2024-11-28 18:54:00.917317] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.565 [2024-11-28 18:54:00.917363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:31.565 BaseBdev2 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.565 BaseBdev3_malloc 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.565 [2024-11-28 18:54:00.943955] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:31.565 [2024-11-28 18:54:00.944004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.565 [2024-11-28 18:54:00.944022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:31.565 [2024-11-28 18:54:00.944032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.565 [2024-11-28 18:54:00.946014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.565 [2024-11-28 18:54:00.946058] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:13:31.565 BaseBdev3 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.565 BaseBdev4_malloc 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.565 [2024-11-28 18:54:00.995413] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:31.565 [2024-11-28 18:54:00.995538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.565 [2024-11-28 18:54:00.995581] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:31.565 [2024-11-28 18:54:00.995606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.565 [2024-11-28 18:54:00.999702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.565 [2024-11-28 18:54:00.999751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:31.565 BaseBdev4 00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:31.565 18:54:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.565 spare_malloc 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.565 spare_delay 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.565 [2024-11-28 18:54:01.037845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:31.565 [2024-11-28 18:54:01.037890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.565 [2024-11-28 18:54:01.037906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:31.565 [2024-11-28 18:54:01.037916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.565 [2024-11-28 18:54:01.039919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.565 [2024-11-28 18:54:01.039954] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:31.565 spare 00:13:31.565 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.566 [2024-11-28 18:54:01.049918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.566 [2024-11-28 18:54:01.051660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.566 [2024-11-28 18:54:01.051720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.566 [2024-11-28 18:54:01.051758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:31.566 [2024-11-28 18:54:01.052006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:31.566 [2024-11-28 18:54:01.052038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:31.566 [2024-11-28 18:54:01.052282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:31.566 [2024-11-28 18:54:01.052451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:31.566 [2024-11-28 18:54:01.052471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:31.566 [2024-11-28 18:54:01.052613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.566 "name": "raid_bdev1", 00:13:31.566 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:31.566 "strip_size_kb": 0, 00:13:31.566 "state": "online", 00:13:31.566 "raid_level": "raid1", 
00:13:31.566 "superblock": true, 00:13:31.566 "num_base_bdevs": 4, 00:13:31.566 "num_base_bdevs_discovered": 4, 00:13:31.566 "num_base_bdevs_operational": 4, 00:13:31.566 "base_bdevs_list": [ 00:13:31.566 { 00:13:31.566 "name": "BaseBdev1", 00:13:31.566 "uuid": "379d95ee-16a7-5506-8256-5154b9e19d60", 00:13:31.566 "is_configured": true, 00:13:31.566 "data_offset": 2048, 00:13:31.566 "data_size": 63488 00:13:31.566 }, 00:13:31.566 { 00:13:31.566 "name": "BaseBdev2", 00:13:31.566 "uuid": "e84e28fc-9748-5c5f-808f-978ad76e56db", 00:13:31.566 "is_configured": true, 00:13:31.566 "data_offset": 2048, 00:13:31.566 "data_size": 63488 00:13:31.566 }, 00:13:31.566 { 00:13:31.566 "name": "BaseBdev3", 00:13:31.566 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:31.566 "is_configured": true, 00:13:31.566 "data_offset": 2048, 00:13:31.566 "data_size": 63488 00:13:31.566 }, 00:13:31.566 { 00:13:31.566 "name": "BaseBdev4", 00:13:31.566 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:31.566 "is_configured": true, 00:13:31.566 "data_offset": 2048, 00:13:31.566 "data_size": 63488 00:13:31.566 } 00:13:31.566 ] 00:13:31.566 }' 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.566 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 [2024-11-28 18:54:01.510254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:32.137 [2024-11-28 18:54:01.605996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.137 18:54:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.137 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.138 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.138 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.138 "name": "raid_bdev1", 00:13:32.138 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:32.138 "strip_size_kb": 0, 00:13:32.138 "state": "online", 00:13:32.138 "raid_level": "raid1", 00:13:32.138 "superblock": true, 00:13:32.138 "num_base_bdevs": 4, 00:13:32.138 "num_base_bdevs_discovered": 3, 00:13:32.138 "num_base_bdevs_operational": 3, 00:13:32.138 "base_bdevs_list": [ 00:13:32.138 { 00:13:32.138 "name": null, 00:13:32.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.138 "is_configured": false, 00:13:32.138 "data_offset": 0, 00:13:32.138 "data_size": 
63488 00:13:32.138 }, 00:13:32.138 { 00:13:32.138 "name": "BaseBdev2", 00:13:32.138 "uuid": "e84e28fc-9748-5c5f-808f-978ad76e56db", 00:13:32.138 "is_configured": true, 00:13:32.138 "data_offset": 2048, 00:13:32.138 "data_size": 63488 00:13:32.138 }, 00:13:32.138 { 00:13:32.138 "name": "BaseBdev3", 00:13:32.138 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:32.138 "is_configured": true, 00:13:32.138 "data_offset": 2048, 00:13:32.138 "data_size": 63488 00:13:32.138 }, 00:13:32.138 { 00:13:32.138 "name": "BaseBdev4", 00:13:32.138 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:32.138 "is_configured": true, 00:13:32.138 "data_offset": 2048, 00:13:32.138 "data_size": 63488 00:13:32.138 } 00:13:32.138 ] 00:13:32.138 }' 00:13:32.138 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.138 18:54:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.138 [2024-11-28 18:54:01.696025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:32.138 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:32.138 Zero copy mechanism will not be used. 00:13:32.138 Running I/O for 60 seconds... 
00:13:32.709 18:54:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:32.709 18:54:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.709 18:54:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.709 [2024-11-28 18:54:02.100937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.709 18:54:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.709 18:54:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:32.709 [2024-11-28 18:54:02.148301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:32.709 [2024-11-28 18:54:02.150358] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.709 [2024-11-28 18:54:02.265214] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.709 [2024-11-28 18:54:02.265736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.969 [2024-11-28 18:54:02.482857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:32.969 [2024-11-28 18:54:02.483105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:33.229 168.00 IOPS, 504.00 MiB/s [2024-11-28T18:54:02.835Z] [2024-11-28 18:54:02.828927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:33.489 [2024-11-28 18:54:03.045460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.750 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.750 "name": "raid_bdev1", 00:13:33.750 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:33.750 "strip_size_kb": 0, 00:13:33.750 "state": "online", 00:13:33.750 "raid_level": "raid1", 00:13:33.750 "superblock": true, 00:13:33.750 "num_base_bdevs": 4, 00:13:33.750 "num_base_bdevs_discovered": 4, 00:13:33.750 "num_base_bdevs_operational": 4, 00:13:33.750 "process": { 00:13:33.750 "type": "rebuild", 00:13:33.750 "target": "spare", 00:13:33.750 "progress": { 00:13:33.750 "blocks": 10240, 00:13:33.750 "percent": 16 00:13:33.750 } 00:13:33.750 }, 00:13:33.750 "base_bdevs_list": [ 00:13:33.750 { 00:13:33.750 "name": "spare", 00:13:33.751 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:33.751 "is_configured": true, 00:13:33.751 "data_offset": 2048, 00:13:33.751 "data_size": 63488 
00:13:33.751 }, 00:13:33.751 { 00:13:33.751 "name": "BaseBdev2", 00:13:33.751 "uuid": "e84e28fc-9748-5c5f-808f-978ad76e56db", 00:13:33.751 "is_configured": true, 00:13:33.751 "data_offset": 2048, 00:13:33.751 "data_size": 63488 00:13:33.751 }, 00:13:33.751 { 00:13:33.751 "name": "BaseBdev3", 00:13:33.751 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:33.751 "is_configured": true, 00:13:33.751 "data_offset": 2048, 00:13:33.751 "data_size": 63488 00:13:33.751 }, 00:13:33.751 { 00:13:33.751 "name": "BaseBdev4", 00:13:33.751 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:33.751 "is_configured": true, 00:13:33.751 "data_offset": 2048, 00:13:33.751 "data_size": 63488 00:13:33.751 } 00:13:33.751 ] 00:13:33.751 }' 00:13:33.751 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.751 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.751 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.751 [2024-11-28 18:54:03.267700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:33.751 [2024-11-28 18:54:03.268061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:33.751 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.751 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:33.751 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.751 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.751 [2024-11-28 18:54:03.282614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.012 [2024-11-28 
18:54:03.377546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:34.012 [2024-11-28 18:54:03.496215] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:34.012 [2024-11-28 18:54:03.500144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.012 [2024-11-28 18:54:03.500193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.012 [2024-11-28 18:54:03.500206] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:34.012 [2024-11-28 18:54:03.523568] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.012 "name": "raid_bdev1", 00:13:34.012 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:34.012 "strip_size_kb": 0, 00:13:34.012 "state": "online", 00:13:34.012 "raid_level": "raid1", 00:13:34.012 "superblock": true, 00:13:34.012 "num_base_bdevs": 4, 00:13:34.012 "num_base_bdevs_discovered": 3, 00:13:34.012 "num_base_bdevs_operational": 3, 00:13:34.012 "base_bdevs_list": [ 00:13:34.012 { 00:13:34.012 "name": null, 00:13:34.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.012 "is_configured": false, 00:13:34.012 "data_offset": 0, 00:13:34.012 "data_size": 63488 00:13:34.012 }, 00:13:34.012 { 00:13:34.012 "name": "BaseBdev2", 00:13:34.012 "uuid": "e84e28fc-9748-5c5f-808f-978ad76e56db", 00:13:34.012 "is_configured": true, 00:13:34.012 "data_offset": 2048, 00:13:34.012 "data_size": 63488 00:13:34.012 }, 00:13:34.012 { 00:13:34.012 "name": "BaseBdev3", 00:13:34.012 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:34.012 "is_configured": true, 00:13:34.012 "data_offset": 2048, 00:13:34.012 "data_size": 63488 00:13:34.012 }, 00:13:34.012 { 00:13:34.012 "name": "BaseBdev4", 00:13:34.012 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:34.012 "is_configured": true, 00:13:34.012 "data_offset": 2048, 00:13:34.012 "data_size": 63488 00:13:34.012 } 
00:13:34.012 ] 00:13:34.012 }' 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.012 18:54:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.533 153.50 IOPS, 460.50 MiB/s [2024-11-28T18:54:04.139Z] 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.533 "name": "raid_bdev1", 00:13:34.533 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:34.533 "strip_size_kb": 0, 00:13:34.533 "state": "online", 00:13:34.533 "raid_level": "raid1", 00:13:34.533 "superblock": true, 00:13:34.533 "num_base_bdevs": 4, 00:13:34.533 "num_base_bdevs_discovered": 3, 00:13:34.533 "num_base_bdevs_operational": 3, 00:13:34.533 "base_bdevs_list": [ 00:13:34.533 { 00:13:34.533 "name": null, 00:13:34.533 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:34.533 "is_configured": false, 00:13:34.533 "data_offset": 0, 00:13:34.533 "data_size": 63488 00:13:34.533 }, 00:13:34.533 { 00:13:34.533 "name": "BaseBdev2", 00:13:34.533 "uuid": "e84e28fc-9748-5c5f-808f-978ad76e56db", 00:13:34.533 "is_configured": true, 00:13:34.533 "data_offset": 2048, 00:13:34.533 "data_size": 63488 00:13:34.533 }, 00:13:34.533 { 00:13:34.533 "name": "BaseBdev3", 00:13:34.533 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:34.533 "is_configured": true, 00:13:34.533 "data_offset": 2048, 00:13:34.533 "data_size": 63488 00:13:34.533 }, 00:13:34.533 { 00:13:34.533 "name": "BaseBdev4", 00:13:34.533 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:34.533 "is_configured": true, 00:13:34.533 "data_offset": 2048, 00:13:34.533 "data_size": 63488 00:13:34.533 } 00:13:34.533 ] 00:13:34.533 }' 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.533 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.793 [2024-11-28 18:54:04.137964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.793 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.793 18:54:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:13:34.793 [2024-11-28 18:54:04.187053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:13:34.793 [2024-11-28 18:54:04.188938] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.793 [2024-11-28 18:54:04.304855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:34.793 [2024-11-28 18:54:04.305386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:35.054 [2024-11-28 18:54:04.522316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:35.054 [2024-11-28 18:54:04.523000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:35.314 174.00 IOPS, 522.00 MiB/s [2024-11-28T18:54:04.920Z] [2024-11-28 18:54:04.854224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:35.314 [2024-11-28 18:54:04.855404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:35.574 [2024-11-28 18:54:05.077254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:35.574 [2024-11-28 18:54:05.077904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:35.574 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.574 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.574 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.574 18:54:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.574 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.574 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.574 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.574 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.574 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.834 "name": "raid_bdev1", 00:13:35.834 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:35.834 "strip_size_kb": 0, 00:13:35.834 "state": "online", 00:13:35.834 "raid_level": "raid1", 00:13:35.834 "superblock": true, 00:13:35.834 "num_base_bdevs": 4, 00:13:35.834 "num_base_bdevs_discovered": 4, 00:13:35.834 "num_base_bdevs_operational": 4, 00:13:35.834 "process": { 00:13:35.834 "type": "rebuild", 00:13:35.834 "target": "spare", 00:13:35.834 "progress": { 00:13:35.834 "blocks": 10240, 00:13:35.834 "percent": 16 00:13:35.834 } 00:13:35.834 }, 00:13:35.834 "base_bdevs_list": [ 00:13:35.834 { 00:13:35.834 "name": "spare", 00:13:35.834 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:35.834 "is_configured": true, 00:13:35.834 "data_offset": 2048, 00:13:35.834 "data_size": 63488 00:13:35.834 }, 00:13:35.834 { 00:13:35.834 "name": "BaseBdev2", 00:13:35.834 "uuid": "e84e28fc-9748-5c5f-808f-978ad76e56db", 00:13:35.834 "is_configured": true, 00:13:35.834 "data_offset": 2048, 00:13:35.834 "data_size": 63488 00:13:35.834 }, 00:13:35.834 { 00:13:35.834 "name": "BaseBdev3", 00:13:35.834 "uuid": 
"4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:35.834 "is_configured": true, 00:13:35.834 "data_offset": 2048, 00:13:35.834 "data_size": 63488 00:13:35.834 }, 00:13:35.834 { 00:13:35.834 "name": "BaseBdev4", 00:13:35.834 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:35.834 "is_configured": true, 00:13:35.834 "data_offset": 2048, 00:13:35.834 "data_size": 63488 00:13:35.834 } 00:13:35.834 ] 00:13:35.834 }' 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:35.834 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.834 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.834 [2024-11-28 18:54:05.321061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:35.834 
[2024-11-28 18:54:05.406498] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:36.094 [2024-11-28 18:54:05.605998] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:13:36.094 [2024-11-28 18:54:05.606032] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:36.094 "name": "raid_bdev1", 00:13:36.094 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:36.094 "strip_size_kb": 0, 00:13:36.094 "state": "online", 00:13:36.094 "raid_level": "raid1", 00:13:36.094 "superblock": true, 00:13:36.094 "num_base_bdevs": 4, 00:13:36.094 "num_base_bdevs_discovered": 3, 00:13:36.094 "num_base_bdevs_operational": 3, 00:13:36.094 "process": { 00:13:36.094 "type": "rebuild", 00:13:36.094 "target": "spare", 00:13:36.094 "progress": { 00:13:36.094 "blocks": 14336, 00:13:36.094 "percent": 22 00:13:36.094 } 00:13:36.094 }, 00:13:36.094 "base_bdevs_list": [ 00:13:36.094 { 00:13:36.094 "name": "spare", 00:13:36.094 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:36.094 "is_configured": true, 00:13:36.094 "data_offset": 2048, 00:13:36.094 "data_size": 63488 00:13:36.094 }, 00:13:36.094 { 00:13:36.094 "name": null, 00:13:36.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.094 "is_configured": false, 00:13:36.094 "data_offset": 0, 00:13:36.094 "data_size": 63488 00:13:36.094 }, 00:13:36.094 { 00:13:36.094 "name": "BaseBdev3", 00:13:36.094 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:36.094 "is_configured": true, 00:13:36.094 "data_offset": 2048, 00:13:36.094 "data_size": 63488 00:13:36.094 }, 00:13:36.094 { 00:13:36.094 "name": "BaseBdev4", 00:13:36.094 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:36.094 "is_configured": true, 00:13:36.094 "data_offset": 2048, 00:13:36.094 "data_size": 63488 00:13:36.094 } 00:13:36.094 ] 00:13:36.094 }' 00:13:36.094 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.354 146.75 IOPS, 440.25 MiB/s [2024-11-28T18:54:05.960Z] 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.354 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.354 [2024-11-28 18:54:05.723414] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:36.354 [2024-11-28 18:54:05.723664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:36.354 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.354 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:13:36.354 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.354 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.354 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.354 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.355 "name": "raid_bdev1", 00:13:36.355 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:36.355 "strip_size_kb": 0, 00:13:36.355 "state": 
"online", 00:13:36.355 "raid_level": "raid1", 00:13:36.355 "superblock": true, 00:13:36.355 "num_base_bdevs": 4, 00:13:36.355 "num_base_bdevs_discovered": 3, 00:13:36.355 "num_base_bdevs_operational": 3, 00:13:36.355 "process": { 00:13:36.355 "type": "rebuild", 00:13:36.355 "target": "spare", 00:13:36.355 "progress": { 00:13:36.355 "blocks": 16384, 00:13:36.355 "percent": 25 00:13:36.355 } 00:13:36.355 }, 00:13:36.355 "base_bdevs_list": [ 00:13:36.355 { 00:13:36.355 "name": "spare", 00:13:36.355 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:36.355 "is_configured": true, 00:13:36.355 "data_offset": 2048, 00:13:36.355 "data_size": 63488 00:13:36.355 }, 00:13:36.355 { 00:13:36.355 "name": null, 00:13:36.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.355 "is_configured": false, 00:13:36.355 "data_offset": 0, 00:13:36.355 "data_size": 63488 00:13:36.355 }, 00:13:36.355 { 00:13:36.355 "name": "BaseBdev3", 00:13:36.355 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:36.355 "is_configured": true, 00:13:36.355 "data_offset": 2048, 00:13:36.355 "data_size": 63488 00:13:36.355 }, 00:13:36.355 { 00:13:36.355 "name": "BaseBdev4", 00:13:36.355 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:36.355 "is_configured": true, 00:13:36.355 "data_offset": 2048, 00:13:36.355 "data_size": 63488 00:13:36.355 } 00:13:36.355 ] 00:13:36.355 }' 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.355 18:54:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:36.355 [2024-11-28 18:54:05.945568] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:36.615 [2024-11-28 18:54:06.159990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:37.184 [2024-11-28 18:54:06.485486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:37.444 128.80 IOPS, 386.40 MiB/s [2024-11-28T18:54:07.050Z] 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.444 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.444 "name": "raid_bdev1", 00:13:37.444 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:37.444 "strip_size_kb": 0, 00:13:37.444 "state": 
"online", 00:13:37.444 "raid_level": "raid1", 00:13:37.444 "superblock": true, 00:13:37.444 "num_base_bdevs": 4, 00:13:37.444 "num_base_bdevs_discovered": 3, 00:13:37.444 "num_base_bdevs_operational": 3, 00:13:37.444 "process": { 00:13:37.444 "type": "rebuild", 00:13:37.444 "target": "spare", 00:13:37.444 "progress": { 00:13:37.444 "blocks": 32768, 00:13:37.444 "percent": 51 00:13:37.445 } 00:13:37.445 }, 00:13:37.445 "base_bdevs_list": [ 00:13:37.445 { 00:13:37.445 "name": "spare", 00:13:37.445 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:37.445 "is_configured": true, 00:13:37.445 "data_offset": 2048, 00:13:37.445 "data_size": 63488 00:13:37.445 }, 00:13:37.445 { 00:13:37.445 "name": null, 00:13:37.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.445 "is_configured": false, 00:13:37.445 "data_offset": 0, 00:13:37.445 "data_size": 63488 00:13:37.445 }, 00:13:37.445 { 00:13:37.445 "name": "BaseBdev3", 00:13:37.445 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:37.445 "is_configured": true, 00:13:37.445 "data_offset": 2048, 00:13:37.445 "data_size": 63488 00:13:37.445 }, 00:13:37.445 { 00:13:37.445 "name": "BaseBdev4", 00:13:37.445 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:37.445 "is_configured": true, 00:13:37.445 "data_offset": 2048, 00:13:37.445 "data_size": 63488 00:13:37.445 } 00:13:37.445 ] 00:13:37.445 }' 00:13:37.445 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.445 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.445 18:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.445 18:54:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.445 18:54:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.705 [2024-11-28 18:54:07.216705] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:37.965 [2024-11-28 18:54:07.545997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:38.485 114.00 IOPS, 342.00 MiB/s [2024-11-28T18:54:08.091Z] 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.485 "name": "raid_bdev1", 00:13:38.485 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:38.485 "strip_size_kb": 0, 00:13:38.485 "state": "online", 00:13:38.485 "raid_level": "raid1", 00:13:38.485 "superblock": true, 00:13:38.485 "num_base_bdevs": 4, 00:13:38.485 "num_base_bdevs_discovered": 3, 
00:13:38.485 "num_base_bdevs_operational": 3, 00:13:38.485 "process": { 00:13:38.485 "type": "rebuild", 00:13:38.485 "target": "spare", 00:13:38.485 "progress": { 00:13:38.485 "blocks": 51200, 00:13:38.485 "percent": 80 00:13:38.485 } 00:13:38.485 }, 00:13:38.485 "base_bdevs_list": [ 00:13:38.485 { 00:13:38.485 "name": "spare", 00:13:38.485 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:38.485 "is_configured": true, 00:13:38.485 "data_offset": 2048, 00:13:38.485 "data_size": 63488 00:13:38.485 }, 00:13:38.485 { 00:13:38.485 "name": null, 00:13:38.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.485 "is_configured": false, 00:13:38.485 "data_offset": 0, 00:13:38.485 "data_size": 63488 00:13:38.485 }, 00:13:38.485 { 00:13:38.485 "name": "BaseBdev3", 00:13:38.485 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:38.485 "is_configured": true, 00:13:38.485 "data_offset": 2048, 00:13:38.485 "data_size": 63488 00:13:38.485 }, 00:13:38.485 { 00:13:38.485 "name": "BaseBdev4", 00:13:38.485 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:38.485 "is_configured": true, 00:13:38.485 "data_offset": 2048, 00:13:38.485 "data_size": 63488 00:13:38.485 } 00:13:38.485 ] 00:13:38.485 }' 00:13:38.485 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.485 [2024-11-28 18:54:08.084713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:38.745 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.745 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.745 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.745 18:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.315 102.29 IOPS, 306.86 MiB/s 
[2024-11-28T18:54:08.921Z] [2024-11-28 18:54:08.738784] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:39.315 [2024-11-28 18:54:08.838733] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:39.315 [2024-11-28 18:54:08.840935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.886 "name": "raid_bdev1", 00:13:39.886 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:39.886 "strip_size_kb": 0, 00:13:39.886 "state": "online", 00:13:39.886 "raid_level": "raid1", 00:13:39.886 "superblock": true, 00:13:39.886 
"num_base_bdevs": 4, 00:13:39.886 "num_base_bdevs_discovered": 3, 00:13:39.886 "num_base_bdevs_operational": 3, 00:13:39.886 "base_bdevs_list": [ 00:13:39.886 { 00:13:39.886 "name": "spare", 00:13:39.886 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:39.886 "is_configured": true, 00:13:39.886 "data_offset": 2048, 00:13:39.886 "data_size": 63488 00:13:39.886 }, 00:13:39.886 { 00:13:39.886 "name": null, 00:13:39.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.886 "is_configured": false, 00:13:39.886 "data_offset": 0, 00:13:39.886 "data_size": 63488 00:13:39.886 }, 00:13:39.886 { 00:13:39.886 "name": "BaseBdev3", 00:13:39.886 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:39.886 "is_configured": true, 00:13:39.886 "data_offset": 2048, 00:13:39.886 "data_size": 63488 00:13:39.886 }, 00:13:39.886 { 00:13:39.886 "name": "BaseBdev4", 00:13:39.886 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:39.886 "is_configured": true, 00:13:39.886 "data_offset": 2048, 00:13:39.886 "data_size": 63488 00:13:39.886 } 00:13:39.886 ] 00:13:39.886 }' 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.886 "name": "raid_bdev1", 00:13:39.886 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:39.886 "strip_size_kb": 0, 00:13:39.886 "state": "online", 00:13:39.886 "raid_level": "raid1", 00:13:39.886 "superblock": true, 00:13:39.886 "num_base_bdevs": 4, 00:13:39.886 "num_base_bdevs_discovered": 3, 00:13:39.886 "num_base_bdevs_operational": 3, 00:13:39.886 "base_bdevs_list": [ 00:13:39.886 { 00:13:39.886 "name": "spare", 00:13:39.886 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:39.886 "is_configured": true, 00:13:39.886 "data_offset": 2048, 00:13:39.886 "data_size": 63488 00:13:39.886 }, 00:13:39.886 { 00:13:39.886 "name": null, 00:13:39.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.886 "is_configured": false, 00:13:39.886 "data_offset": 0, 00:13:39.886 "data_size": 63488 00:13:39.886 }, 00:13:39.886 { 00:13:39.886 "name": "BaseBdev3", 00:13:39.886 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:39.886 "is_configured": true, 00:13:39.886 "data_offset": 2048, 00:13:39.886 "data_size": 63488 00:13:39.886 }, 00:13:39.886 { 00:13:39.886 "name": "BaseBdev4", 
00:13:39.886 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:39.886 "is_configured": true, 00:13:39.886 "data_offset": 2048, 00:13:39.886 "data_size": 63488 00:13:39.886 } 00:13:39.886 ] 00:13:39.886 }' 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.886 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.146 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.146 "name": "raid_bdev1", 00:13:40.146 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:40.146 "strip_size_kb": 0, 00:13:40.146 "state": "online", 00:13:40.146 "raid_level": "raid1", 00:13:40.146 "superblock": true, 00:13:40.146 "num_base_bdevs": 4, 00:13:40.146 "num_base_bdevs_discovered": 3, 00:13:40.146 "num_base_bdevs_operational": 3, 00:13:40.146 "base_bdevs_list": [ 00:13:40.146 { 00:13:40.146 "name": "spare", 00:13:40.146 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:40.146 "is_configured": true, 00:13:40.146 "data_offset": 2048, 00:13:40.146 "data_size": 63488 00:13:40.146 }, 00:13:40.146 { 00:13:40.146 "name": null, 00:13:40.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.146 "is_configured": false, 00:13:40.146 "data_offset": 0, 00:13:40.146 "data_size": 63488 00:13:40.146 }, 00:13:40.147 { 00:13:40.147 "name": "BaseBdev3", 00:13:40.147 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:40.147 "is_configured": true, 00:13:40.147 "data_offset": 2048, 00:13:40.147 "data_size": 63488 00:13:40.147 }, 00:13:40.147 { 00:13:40.147 "name": "BaseBdev4", 00:13:40.147 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:40.147 "is_configured": true, 00:13:40.147 "data_offset": 2048, 00:13:40.147 "data_size": 63488 00:13:40.147 } 00:13:40.147 ] 00:13:40.147 }' 00:13:40.147 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.147 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:13:40.407 94.12 IOPS, 282.38 MiB/s [2024-11-28T18:54:10.013Z] 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:40.407 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.407 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.407 [2024-11-28 18:54:09.907141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.407 [2024-11-28 18:54:09.907181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.407 00:13:40.407 Latency(us) 00:13:40.407 [2024-11-28T18:54:10.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.407 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:40.407 raid_bdev1 : 8.22 92.74 278.22 0.00 0.00 13943.98 276.68 113786.90 00:13:40.407 [2024-11-28T18:54:10.013Z] =================================================================================================================== 00:13:40.407 [2024-11-28T18:54:10.013Z] Total : 92.74 278.22 0.00 0.00 13943.98 276.68 113786.90 00:13:40.407 [2024-11-28 18:54:09.918185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.407 [2024-11-28 18:54:09.918241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.408 [2024-11-28 18:54:09.918328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.408 [2024-11-28 18:54:09.918341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:40.408 { 00:13:40.408 "results": [ 00:13:40.408 { 00:13:40.408 "job": "raid_bdev1", 00:13:40.408 "core_mask": "0x1", 00:13:40.408 "workload": "randrw", 00:13:40.408 "percentage": 50, 00:13:40.408 "status": "finished", 
00:13:40.408 "queue_depth": 2, 00:13:40.408 "io_size": 3145728, 00:13:40.408 "runtime": 8.216374, 00:13:40.408 "iops": 92.74164004705726, 00:13:40.408 "mibps": 278.22492014117176, 00:13:40.408 "io_failed": 0, 00:13:40.408 "io_timeout": 0, 00:13:40.408 "avg_latency_us": 13943.98312779542, 00:13:40.408 "min_latency_us": 276.6843894360673, 00:13:40.408 "max_latency_us": 113786.90142072692 00:13:40.408 } 00:13:40.408 ], 00:13:40.408 "core_count": 1 00:13:40.408 } 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:40.408 
18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:40.408 18:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:40.675 /dev/nbd0 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.675 1+0 records in 00:13:40.675 1+0 records out 00:13:40.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468318 s, 8.7 MB/s 00:13:40.675 
18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:40.675 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:40.936 /dev/nbd1 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.936 1+0 records in 00:13:40.936 1+0 records out 00:13:40.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342237 s, 12.0 MB/s 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:40.936 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.196 18:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:41.457 /dev/nbd1 00:13:41.457 18:54:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.457 1+0 records in 00:13:41.457 1+0 records out 00:13:41.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494164 s, 8.3 MB/s 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 
00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.457 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.717 
18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.717 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.978 [2024-11-28 18:54:11.500625] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.978 [2024-11-28 18:54:11.500685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.978 [2024-11-28 18:54:11.500705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:41.978 [2024-11-28 18:54:11.500716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.978 [2024-11-28 18:54:11.502865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.978 [2024-11-28 18:54:11.502909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.978 [2024-11-28 18:54:11.503005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:41.978 [2024-11-28 18:54:11.503049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.978 [2024-11-28 18:54:11.503169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.978 [2024-11-28 18:54:11.503275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.978 spare 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.978 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.239 [2024-11-28 18:54:11.603339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:42.239 [2024-11-28 18:54:11.603371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.239 [2024-11-28 18:54:11.603643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:13:42.239 [2024-11-28 18:54:11.603833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:42.239 [2024-11-28 18:54:11.603852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:42.239 [2024-11-28 18:54:11.603983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.239 "name": "raid_bdev1", 00:13:42.239 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:42.239 "strip_size_kb": 0, 00:13:42.239 "state": "online", 00:13:42.239 "raid_level": "raid1", 00:13:42.239 "superblock": true, 00:13:42.239 "num_base_bdevs": 4, 00:13:42.239 "num_base_bdevs_discovered": 3, 00:13:42.239 "num_base_bdevs_operational": 3, 00:13:42.239 "base_bdevs_list": [ 00:13:42.239 { 00:13:42.239 "name": "spare", 00:13:42.239 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:42.239 "is_configured": true, 00:13:42.239 "data_offset": 2048, 00:13:42.239 "data_size": 63488 00:13:42.239 }, 00:13:42.239 { 00:13:42.239 "name": null, 00:13:42.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.239 "is_configured": false, 00:13:42.239 "data_offset": 2048, 00:13:42.239 "data_size": 63488 00:13:42.239 }, 00:13:42.239 { 00:13:42.239 "name": "BaseBdev3", 00:13:42.239 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:42.239 "is_configured": true, 00:13:42.239 "data_offset": 2048, 00:13:42.239 "data_size": 63488 00:13:42.239 }, 
00:13:42.239 { 00:13:42.239 "name": "BaseBdev4", 00:13:42.239 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:42.239 "is_configured": true, 00:13:42.239 "data_offset": 2048, 00:13:42.239 "data_size": 63488 00:13:42.239 } 00:13:42.239 ] 00:13:42.239 }' 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.239 18:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.499 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.759 "name": "raid_bdev1", 00:13:42.759 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:42.759 "strip_size_kb": 0, 00:13:42.759 "state": "online", 00:13:42.759 "raid_level": "raid1", 00:13:42.759 "superblock": true, 00:13:42.759 "num_base_bdevs": 4, 00:13:42.759 
"num_base_bdevs_discovered": 3, 00:13:42.759 "num_base_bdevs_operational": 3, 00:13:42.759 "base_bdevs_list": [ 00:13:42.759 { 00:13:42.759 "name": "spare", 00:13:42.759 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:42.759 "is_configured": true, 00:13:42.759 "data_offset": 2048, 00:13:42.759 "data_size": 63488 00:13:42.759 }, 00:13:42.759 { 00:13:42.759 "name": null, 00:13:42.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.759 "is_configured": false, 00:13:42.759 "data_offset": 2048, 00:13:42.759 "data_size": 63488 00:13:42.759 }, 00:13:42.759 { 00:13:42.759 "name": "BaseBdev3", 00:13:42.759 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:42.759 "is_configured": true, 00:13:42.759 "data_offset": 2048, 00:13:42.759 "data_size": 63488 00:13:42.759 }, 00:13:42.759 { 00:13:42.759 "name": "BaseBdev4", 00:13:42.759 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:42.759 "is_configured": true, 00:13:42.759 "data_offset": 2048, 00:13:42.759 "data_size": 63488 00:13:42.759 } 00:13:42.759 ] 00:13:42.759 }' 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.759 18:54:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.759 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.760 [2024-11-28 18:54:12.268935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.760 "name": "raid_bdev1", 00:13:42.760 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:42.760 "strip_size_kb": 0, 00:13:42.760 "state": "online", 00:13:42.760 "raid_level": "raid1", 00:13:42.760 "superblock": true, 00:13:42.760 "num_base_bdevs": 4, 00:13:42.760 "num_base_bdevs_discovered": 2, 00:13:42.760 "num_base_bdevs_operational": 2, 00:13:42.760 "base_bdevs_list": [ 00:13:42.760 { 00:13:42.760 "name": null, 00:13:42.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.760 "is_configured": false, 00:13:42.760 "data_offset": 0, 00:13:42.760 "data_size": 63488 00:13:42.760 }, 00:13:42.760 { 00:13:42.760 "name": null, 00:13:42.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.760 "is_configured": false, 00:13:42.760 "data_offset": 2048, 00:13:42.760 "data_size": 63488 00:13:42.760 }, 00:13:42.760 { 00:13:42.760 "name": "BaseBdev3", 00:13:42.760 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:42.760 "is_configured": true, 00:13:42.760 "data_offset": 2048, 00:13:42.760 "data_size": 63488 00:13:42.760 }, 00:13:42.760 { 00:13:42.760 "name": "BaseBdev4", 00:13:42.760 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:42.760 "is_configured": true, 00:13:42.760 "data_offset": 2048, 00:13:42.760 "data_size": 63488 00:13:42.760 } 00:13:42.760 ] 00:13:42.760 }' 00:13:42.760 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.760 18:54:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.331 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:43.331 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.331 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.331 [2024-11-28 18:54:12.713288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.331 [2024-11-28 18:54:12.713483] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:43.331 [2024-11-28 18:54:12.713501] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:43.331 [2024-11-28 18:54:12.713534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.331 [2024-11-28 18:54:12.718042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037640 00:13:43.331 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.331 18:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:43.331 [2024-11-28 18:54:12.719907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.271 "name": "raid_bdev1", 00:13:44.271 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:44.271 "strip_size_kb": 0, 00:13:44.271 "state": "online", 00:13:44.271 "raid_level": "raid1", 00:13:44.271 "superblock": true, 00:13:44.271 "num_base_bdevs": 4, 00:13:44.271 "num_base_bdevs_discovered": 3, 00:13:44.271 "num_base_bdevs_operational": 3, 00:13:44.271 "process": { 00:13:44.271 "type": "rebuild", 00:13:44.271 "target": "spare", 00:13:44.271 "progress": { 00:13:44.271 "blocks": 20480, 00:13:44.271 "percent": 32 00:13:44.271 } 00:13:44.271 }, 00:13:44.271 "base_bdevs_list": [ 00:13:44.271 { 00:13:44.271 "name": "spare", 00:13:44.271 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:44.271 "is_configured": true, 00:13:44.271 "data_offset": 2048, 00:13:44.271 "data_size": 63488 00:13:44.271 }, 00:13:44.271 { 00:13:44.271 "name": null, 00:13:44.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.271 "is_configured": false, 00:13:44.271 "data_offset": 2048, 00:13:44.271 "data_size": 63488 00:13:44.271 }, 00:13:44.271 { 00:13:44.271 "name": "BaseBdev3", 00:13:44.271 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:44.271 "is_configured": true, 00:13:44.271 "data_offset": 2048, 00:13:44.271 "data_size": 63488 00:13:44.271 }, 00:13:44.271 { 
00:13:44.271 "name": "BaseBdev4", 00:13:44.271 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:44.271 "is_configured": true, 00:13:44.271 "data_offset": 2048, 00:13:44.271 "data_size": 63488 00:13:44.271 } 00:13:44.271 ] 00:13:44.271 }' 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.271 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.271 [2024-11-28 18:54:13.874267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.530 [2024-11-28 18:54:13.926129] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:44.531 [2024-11-28 18:54:13.926203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.531 [2024-11-28 18:54:13.926219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.531 [2024-11-28 18:54:13.926228] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.531 "name": "raid_bdev1", 00:13:44.531 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:44.531 "strip_size_kb": 0, 00:13:44.531 "state": "online", 00:13:44.531 "raid_level": "raid1", 00:13:44.531 "superblock": true, 00:13:44.531 "num_base_bdevs": 4, 00:13:44.531 "num_base_bdevs_discovered": 2, 00:13:44.531 "num_base_bdevs_operational": 2, 00:13:44.531 "base_bdevs_list": [ 00:13:44.531 { 00:13:44.531 
"name": null, 00:13:44.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.531 "is_configured": false, 00:13:44.531 "data_offset": 0, 00:13:44.531 "data_size": 63488 00:13:44.531 }, 00:13:44.531 { 00:13:44.531 "name": null, 00:13:44.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.531 "is_configured": false, 00:13:44.531 "data_offset": 2048, 00:13:44.531 "data_size": 63488 00:13:44.531 }, 00:13:44.531 { 00:13:44.531 "name": "BaseBdev3", 00:13:44.531 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:44.531 "is_configured": true, 00:13:44.531 "data_offset": 2048, 00:13:44.531 "data_size": 63488 00:13:44.531 }, 00:13:44.531 { 00:13:44.531 "name": "BaseBdev4", 00:13:44.531 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:44.531 "is_configured": true, 00:13:44.531 "data_offset": 2048, 00:13:44.531 "data_size": 63488 00:13:44.531 } 00:13:44.531 ] 00:13:44.531 }' 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.531 18:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.101 18:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.101 18:54:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.101 18:54:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.101 [2024-11-28 18:54:14.442806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.101 [2024-11-28 18:54:14.442872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.101 [2024-11-28 18:54:14.442895] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:45.101 [2024-11-28 18:54:14.442906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.101 [2024-11-28 18:54:14.443340] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.101 [2024-11-28 18:54:14.443370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.101 [2024-11-28 18:54:14.443468] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.101 [2024-11-28 18:54:14.443489] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:45.101 [2024-11-28 18:54:14.443501] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:45.101 [2024-11-28 18:54:14.443539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.101 [2024-11-28 18:54:14.447868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037710 00:13:45.101 spare 00:13:45.101 18:54:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.101 18:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:45.101 [2024-11-28 18:54:14.449736] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.042 "name": "raid_bdev1", 00:13:46.042 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:46.042 "strip_size_kb": 0, 00:13:46.042 "state": "online", 00:13:46.042 "raid_level": "raid1", 00:13:46.042 "superblock": true, 00:13:46.042 "num_base_bdevs": 4, 00:13:46.042 "num_base_bdevs_discovered": 3, 00:13:46.042 "num_base_bdevs_operational": 3, 00:13:46.042 "process": { 00:13:46.042 "type": "rebuild", 00:13:46.042 "target": "spare", 00:13:46.042 "progress": { 00:13:46.042 "blocks": 20480, 00:13:46.042 "percent": 32 00:13:46.042 } 00:13:46.042 }, 00:13:46.042 "base_bdevs_list": [ 00:13:46.042 { 00:13:46.042 "name": "spare", 00:13:46.042 "uuid": "57c973d3-279e-51ec-b856-473d398101b2", 00:13:46.042 "is_configured": true, 00:13:46.042 "data_offset": 2048, 00:13:46.042 "data_size": 63488 00:13:46.042 }, 00:13:46.042 { 00:13:46.042 "name": null, 00:13:46.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.042 "is_configured": false, 00:13:46.042 "data_offset": 2048, 00:13:46.042 "data_size": 63488 00:13:46.042 }, 00:13:46.042 { 00:13:46.042 "name": "BaseBdev3", 00:13:46.042 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:46.042 "is_configured": true, 00:13:46.042 "data_offset": 2048, 00:13:46.042 "data_size": 63488 00:13:46.042 }, 00:13:46.042 { 00:13:46.042 "name": "BaseBdev4", 00:13:46.042 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:46.042 "is_configured": true, 00:13:46.042 "data_offset": 2048, 00:13:46.042 "data_size": 63488 00:13:46.042 } 00:13:46.042 
] 00:13:46.042 }' 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.042 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.042 [2024-11-28 18:54:15.600056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.301 [2024-11-28 18:54:15.655965] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:46.301 [2024-11-28 18:54:15.656028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.301 [2024-11-28 18:54:15.656046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.301 [2024-11-28 18:54:15.656053] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.301 "name": "raid_bdev1", 00:13:46.301 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:46.301 "strip_size_kb": 0, 00:13:46.301 "state": "online", 00:13:46.301 "raid_level": "raid1", 00:13:46.301 "superblock": true, 00:13:46.301 "num_base_bdevs": 4, 00:13:46.301 "num_base_bdevs_discovered": 2, 00:13:46.301 "num_base_bdevs_operational": 2, 00:13:46.301 "base_bdevs_list": [ 00:13:46.301 { 00:13:46.301 "name": null, 00:13:46.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.301 "is_configured": false, 00:13:46.301 "data_offset": 0, 00:13:46.301 "data_size": 63488 00:13:46.301 }, 00:13:46.301 { 
00:13:46.301 "name": null, 00:13:46.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.301 "is_configured": false, 00:13:46.301 "data_offset": 2048, 00:13:46.301 "data_size": 63488 00:13:46.301 }, 00:13:46.301 { 00:13:46.301 "name": "BaseBdev3", 00:13:46.301 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:46.301 "is_configured": true, 00:13:46.301 "data_offset": 2048, 00:13:46.301 "data_size": 63488 00:13:46.301 }, 00:13:46.301 { 00:13:46.301 "name": "BaseBdev4", 00:13:46.301 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:46.301 "is_configured": true, 00:13:46.301 "data_offset": 2048, 00:13:46.301 "data_size": 63488 00:13:46.301 } 00:13:46.301 ] 00:13:46.301 }' 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.301 18:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.560 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.819 "name": "raid_bdev1", 00:13:46.819 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:46.819 "strip_size_kb": 0, 00:13:46.819 "state": "online", 00:13:46.819 "raid_level": "raid1", 00:13:46.819 "superblock": true, 00:13:46.819 "num_base_bdevs": 4, 00:13:46.819 "num_base_bdevs_discovered": 2, 00:13:46.819 "num_base_bdevs_operational": 2, 00:13:46.819 "base_bdevs_list": [ 00:13:46.819 { 00:13:46.819 "name": null, 00:13:46.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.819 "is_configured": false, 00:13:46.819 "data_offset": 0, 00:13:46.819 "data_size": 63488 00:13:46.819 }, 00:13:46.819 { 00:13:46.819 "name": null, 00:13:46.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.819 "is_configured": false, 00:13:46.819 "data_offset": 2048, 00:13:46.819 "data_size": 63488 00:13:46.819 }, 00:13:46.819 { 00:13:46.819 "name": "BaseBdev3", 00:13:46.819 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:46.819 "is_configured": true, 00:13:46.819 "data_offset": 2048, 00:13:46.819 "data_size": 63488 00:13:46.819 }, 00:13:46.819 { 00:13:46.819 "name": "BaseBdev4", 00:13:46.819 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:46.819 "is_configured": true, 00:13:46.819 "data_offset": 2048, 00:13:46.819 "data_size": 63488 00:13:46.819 } 00:13:46.819 ] 00:13:46.819 }' 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.819 [2024-11-28 18:54:16.284582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:46.819 [2024-11-28 18:54:16.284650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.819 [2024-11-28 18:54:16.284672] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:46.819 [2024-11-28 18:54:16.284680] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.819 [2024-11-28 18:54:16.285072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.819 [2024-11-28 18:54:16.285099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.819 [2024-11-28 18:54:16.285167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:46.819 [2024-11-28 18:54:16.285182] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:46.819 [2024-11-28 18:54:16.285199] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:46.819 [2024-11-28 18:54:16.285208] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:46.819 BaseBdev1 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.819 18:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.759 "name": "raid_bdev1", 00:13:47.759 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:47.759 "strip_size_kb": 0, 00:13:47.759 "state": "online", 00:13:47.759 "raid_level": "raid1", 00:13:47.759 "superblock": true, 00:13:47.759 "num_base_bdevs": 4, 00:13:47.759 "num_base_bdevs_discovered": 2, 00:13:47.759 "num_base_bdevs_operational": 2, 00:13:47.759 "base_bdevs_list": [ 00:13:47.759 { 00:13:47.759 "name": null, 00:13:47.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.759 "is_configured": false, 00:13:47.759 "data_offset": 0, 00:13:47.759 "data_size": 63488 00:13:47.759 }, 00:13:47.759 { 00:13:47.759 "name": null, 00:13:47.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.759 "is_configured": false, 00:13:47.759 "data_offset": 2048, 00:13:47.759 "data_size": 63488 00:13:47.759 }, 00:13:47.759 { 00:13:47.759 "name": "BaseBdev3", 00:13:47.759 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:47.759 "is_configured": true, 00:13:47.759 "data_offset": 2048, 00:13:47.759 "data_size": 63488 00:13:47.759 }, 00:13:47.759 { 00:13:47.759 "name": "BaseBdev4", 00:13:47.759 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:47.759 "is_configured": true, 00:13:47.759 "data_offset": 2048, 00:13:47.759 "data_size": 63488 00:13:47.759 } 00:13:47.759 ] 00:13:47.759 }' 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.759 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.328 "name": "raid_bdev1", 00:13:48.328 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:48.328 "strip_size_kb": 0, 00:13:48.328 "state": "online", 00:13:48.328 "raid_level": "raid1", 00:13:48.328 "superblock": true, 00:13:48.328 "num_base_bdevs": 4, 00:13:48.328 "num_base_bdevs_discovered": 2, 00:13:48.328 "num_base_bdevs_operational": 2, 00:13:48.328 "base_bdevs_list": [ 00:13:48.328 { 00:13:48.328 "name": null, 00:13:48.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.328 "is_configured": false, 00:13:48.328 "data_offset": 0, 00:13:48.328 "data_size": 63488 00:13:48.328 }, 00:13:48.328 { 00:13:48.328 "name": null, 00:13:48.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.328 "is_configured": false, 00:13:48.328 "data_offset": 2048, 00:13:48.328 "data_size": 63488 00:13:48.328 }, 00:13:48.328 { 00:13:48.328 "name": "BaseBdev3", 00:13:48.328 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:48.328 "is_configured": true, 00:13:48.328 "data_offset": 2048, 00:13:48.328 "data_size": 63488 00:13:48.328 }, 00:13:48.328 { 00:13:48.328 
"name": "BaseBdev4", 00:13:48.328 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:48.328 "is_configured": true, 00:13:48.328 "data_offset": 2048, 00:13:48.328 "data_size": 63488 00:13:48.328 } 00:13:48.328 ] 00:13:48.328 }' 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.328 [2024-11-28 18:54:17.913375] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.328 [2024-11-28 18:54:17.913540] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:48.328 [2024-11-28 18:54:17.913558] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:48.328 request: 00:13:48.328 { 00:13:48.328 "base_bdev": "BaseBdev1", 00:13:48.328 "raid_bdev": "raid_bdev1", 00:13:48.328 "method": "bdev_raid_add_base_bdev", 00:13:48.328 "req_id": 1 00:13:48.328 } 00:13:48.328 Got JSON-RPC error response 00:13:48.328 response: 00:13:48.328 { 00:13:48.328 "code": -22, 00:13:48.328 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:48.328 } 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:48.328 18:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.738 "name": "raid_bdev1", 00:13:49.738 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:49.738 "strip_size_kb": 0, 00:13:49.738 "state": "online", 00:13:49.738 "raid_level": "raid1", 00:13:49.738 "superblock": true, 00:13:49.738 "num_base_bdevs": 4, 00:13:49.738 "num_base_bdevs_discovered": 2, 00:13:49.738 "num_base_bdevs_operational": 2, 00:13:49.738 "base_bdevs_list": [ 00:13:49.738 { 00:13:49.738 "name": null, 00:13:49.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.738 "is_configured": false, 00:13:49.738 "data_offset": 0, 00:13:49.738 "data_size": 63488 00:13:49.738 }, 00:13:49.738 { 00:13:49.738 "name": null, 00:13:49.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.738 "is_configured": false, 
00:13:49.738 "data_offset": 2048, 00:13:49.738 "data_size": 63488 00:13:49.738 }, 00:13:49.738 { 00:13:49.738 "name": "BaseBdev3", 00:13:49.738 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:49.738 "is_configured": true, 00:13:49.738 "data_offset": 2048, 00:13:49.738 "data_size": 63488 00:13:49.738 }, 00:13:49.738 { 00:13:49.738 "name": "BaseBdev4", 00:13:49.738 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:49.738 "is_configured": true, 00:13:49.738 "data_offset": 2048, 00:13:49.738 "data_size": 63488 00:13:49.738 } 00:13:49.738 ] 00:13:49.738 }' 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.738 18:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:49.998 "name": "raid_bdev1", 00:13:49.998 "uuid": "3932f419-5ac1-4a31-9890-adc328c5412d", 00:13:49.998 "strip_size_kb": 0, 00:13:49.998 "state": "online", 00:13:49.998 "raid_level": "raid1", 00:13:49.998 "superblock": true, 00:13:49.998 "num_base_bdevs": 4, 00:13:49.998 "num_base_bdevs_discovered": 2, 00:13:49.998 "num_base_bdevs_operational": 2, 00:13:49.998 "base_bdevs_list": [ 00:13:49.998 { 00:13:49.998 "name": null, 00:13:49.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.998 "is_configured": false, 00:13:49.998 "data_offset": 0, 00:13:49.998 "data_size": 63488 00:13:49.998 }, 00:13:49.998 { 00:13:49.998 "name": null, 00:13:49.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.998 "is_configured": false, 00:13:49.998 "data_offset": 2048, 00:13:49.998 "data_size": 63488 00:13:49.998 }, 00:13:49.998 { 00:13:49.998 "name": "BaseBdev3", 00:13:49.998 "uuid": "4dc0160c-2217-567a-88c6-e9d94a1ada45", 00:13:49.998 "is_configured": true, 00:13:49.998 "data_offset": 2048, 00:13:49.998 "data_size": 63488 00:13:49.998 }, 00:13:49.998 { 00:13:49.998 "name": "BaseBdev4", 00:13:49.998 "uuid": "77c818d0-39be-535a-9e11-cf6c214547a7", 00:13:49.998 "is_configured": true, 00:13:49.998 "data_offset": 2048, 00:13:49.998 "data_size": 63488 00:13:49.998 } 00:13:49.998 ] 00:13:49.998 }' 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 91228 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 
91228 ']' 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 91228 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91228 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.998 killing process with pid 91228 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91228' 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 91228 00:13:49.998 Received shutdown signal, test time was about 17.902281 seconds 00:13:49.998 00:13:49.998 Latency(us) 00:13:49.998 [2024-11-28T18:54:19.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.998 [2024-11-28T18:54:19.604Z] =================================================================================================================== 00:13:49.998 [2024-11-28T18:54:19.604Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:49.998 [2024-11-28 18:54:19.601697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.998 [2024-11-28 18:54:19.601828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.998 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 91228 00:13:49.998 [2024-11-28 18:54:19.601899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.998 [2024-11-28 18:54:19.601911] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:50.258 [2024-11-28 18:54:19.648085] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.258 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:50.258 00:13:50.258 real 0m19.901s 00:13:50.258 user 0m26.525s 00:13:50.258 sys 0m2.652s 00:13:50.258 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.258 18:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.258 ************************************ 00:13:50.258 END TEST raid_rebuild_test_sb_io 00:13:50.258 ************************************ 00:13:50.518 18:54:19 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:50.518 18:54:19 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:50.518 18:54:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:50.518 18:54:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.518 18:54:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.518 ************************************ 00:13:50.518 START TEST raid5f_state_function_test 00:13:50.518 ************************************ 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:50.518 Process raid pid: 91933 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=91933 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91933' 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 91933 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 91933 ']' 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.518 18:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.518 [2024-11-28 18:54:20.046004] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:13:50.518 [2024-11-28 18:54:20.046149] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.778 [2024-11-28 18:54:20.183241] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:50.778 [2024-11-28 18:54:20.219501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.778 [2024-11-28 18:54:20.245387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.778 [2024-11-28 18:54:20.288841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.778 [2024-11-28 18:54:20.288879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.348 [2024-11-28 18:54:20.857155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.348 [2024-11-28 18:54:20.857210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.348 [2024-11-28 18:54:20.857222] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.348 [2024-11-28 18:54:20.857230] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.348 [2024-11-28 18:54:20.857244] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:51.348 [2024-11-28 18:54:20.857251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.348 "name": "Existed_Raid", 00:13:51.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.348 "strip_size_kb": 64, 00:13:51.348 "state": "configuring", 00:13:51.348 "raid_level": "raid5f", 00:13:51.348 "superblock": false, 00:13:51.348 "num_base_bdevs": 3, 00:13:51.348 "num_base_bdevs_discovered": 0, 00:13:51.348 "num_base_bdevs_operational": 3, 00:13:51.348 "base_bdevs_list": [ 00:13:51.348 { 00:13:51.348 "name": "BaseBdev1", 00:13:51.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.348 "is_configured": false, 00:13:51.348 "data_offset": 0, 00:13:51.348 "data_size": 0 00:13:51.348 }, 00:13:51.348 { 00:13:51.348 "name": "BaseBdev2", 00:13:51.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.348 "is_configured": false, 00:13:51.348 "data_offset": 0, 00:13:51.348 "data_size": 0 00:13:51.348 }, 00:13:51.348 { 00:13:51.348 "name": "BaseBdev3", 00:13:51.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.348 "is_configured": false, 00:13:51.348 "data_offset": 0, 00:13:51.348 "data_size": 0 00:13:51.348 } 00:13:51.348 ] 00:13:51.348 }' 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.348 18:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.918 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:51.918 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.918 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.918 [2024-11-28 
18:54:21.309198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.918 [2024-11-28 18:54:21.309231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:13:51.918 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.919 [2024-11-28 18:54:21.321230] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.919 [2024-11-28 18:54:21.321270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.919 [2024-11-28 18:54:21.321279] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.919 [2024-11-28 18:54:21.321286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.919 [2024-11-28 18:54:21.321294] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:51.919 [2024-11-28 18:54:21.321302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:13:51.919 [2024-11-28 18:54:21.342222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.919 BaseBdev1 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.919 [ 00:13:51.919 { 00:13:51.919 "name": "BaseBdev1", 00:13:51.919 "aliases": [ 00:13:51.919 "3de064e6-dc80-4982-b861-99a43008a5cc" 00:13:51.919 ], 00:13:51.919 "product_name": "Malloc disk", 00:13:51.919 "block_size": 512, 
00:13:51.919 "num_blocks": 65536, 00:13:51.919 "uuid": "3de064e6-dc80-4982-b861-99a43008a5cc", 00:13:51.919 "assigned_rate_limits": { 00:13:51.919 "rw_ios_per_sec": 0, 00:13:51.919 "rw_mbytes_per_sec": 0, 00:13:51.919 "r_mbytes_per_sec": 0, 00:13:51.919 "w_mbytes_per_sec": 0 00:13:51.919 }, 00:13:51.919 "claimed": true, 00:13:51.919 "claim_type": "exclusive_write", 00:13:51.919 "zoned": false, 00:13:51.919 "supported_io_types": { 00:13:51.919 "read": true, 00:13:51.919 "write": true, 00:13:51.919 "unmap": true, 00:13:51.919 "flush": true, 00:13:51.919 "reset": true, 00:13:51.919 "nvme_admin": false, 00:13:51.919 "nvme_io": false, 00:13:51.919 "nvme_io_md": false, 00:13:51.919 "write_zeroes": true, 00:13:51.919 "zcopy": true, 00:13:51.919 "get_zone_info": false, 00:13:51.919 "zone_management": false, 00:13:51.919 "zone_append": false, 00:13:51.919 "compare": false, 00:13:51.919 "compare_and_write": false, 00:13:51.919 "abort": true, 00:13:51.919 "seek_hole": false, 00:13:51.919 "seek_data": false, 00:13:51.919 "copy": true, 00:13:51.919 "nvme_iov_md": false 00:13:51.919 }, 00:13:51.919 "memory_domains": [ 00:13:51.919 { 00:13:51.919 "dma_device_id": "system", 00:13:51.919 "dma_device_type": 1 00:13:51.919 }, 00:13:51.919 { 00:13:51.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.919 "dma_device_type": 2 00:13:51.919 } 00:13:51.919 ], 00:13:51.919 "driver_specific": {} 00:13:51.919 } 00:13:51.919 ] 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.919 "name": "Existed_Raid", 00:13:51.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.919 "strip_size_kb": 64, 00:13:51.919 "state": "configuring", 00:13:51.919 "raid_level": "raid5f", 00:13:51.919 "superblock": false, 00:13:51.919 "num_base_bdevs": 3, 00:13:51.919 "num_base_bdevs_discovered": 1, 00:13:51.919 "num_base_bdevs_operational": 3, 00:13:51.919 "base_bdevs_list": [ 00:13:51.919 { 00:13:51.919 "name": "BaseBdev1", 00:13:51.919 "uuid": 
"3de064e6-dc80-4982-b861-99a43008a5cc", 00:13:51.919 "is_configured": true, 00:13:51.919 "data_offset": 0, 00:13:51.919 "data_size": 65536 00:13:51.919 }, 00:13:51.919 { 00:13:51.919 "name": "BaseBdev2", 00:13:51.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.919 "is_configured": false, 00:13:51.919 "data_offset": 0, 00:13:51.919 "data_size": 0 00:13:51.919 }, 00:13:51.919 { 00:13:51.919 "name": "BaseBdev3", 00:13:51.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.919 "is_configured": false, 00:13:51.919 "data_offset": 0, 00:13:51.919 "data_size": 0 00:13:51.919 } 00:13:51.919 ] 00:13:51.919 }' 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.919 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.490 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:52.490 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.490 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.490 [2024-11-28 18:54:21.838368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.490 [2024-11-28 18:54:21.838417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:52.490 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.490 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:52.490 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.490 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.490 [2024-11-28 
18:54:21.850418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.491 [2024-11-28 18:54:21.852248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.491 [2024-11-28 18:54:21.852287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.491 [2024-11-28 18:54:21.852315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.491 [2024-11-28 18:54:21.852323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.491 "name": "Existed_Raid", 00:13:52.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.491 "strip_size_kb": 64, 00:13:52.491 "state": "configuring", 00:13:52.491 "raid_level": "raid5f", 00:13:52.491 "superblock": false, 00:13:52.491 "num_base_bdevs": 3, 00:13:52.491 "num_base_bdevs_discovered": 1, 00:13:52.491 "num_base_bdevs_operational": 3, 00:13:52.491 "base_bdevs_list": [ 00:13:52.491 { 00:13:52.491 "name": "BaseBdev1", 00:13:52.491 "uuid": "3de064e6-dc80-4982-b861-99a43008a5cc", 00:13:52.491 "is_configured": true, 00:13:52.491 "data_offset": 0, 00:13:52.491 "data_size": 65536 00:13:52.491 }, 00:13:52.491 { 00:13:52.491 "name": "BaseBdev2", 00:13:52.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.491 "is_configured": false, 00:13:52.491 "data_offset": 0, 00:13:52.491 "data_size": 0 00:13:52.491 }, 00:13:52.491 { 00:13:52.491 "name": "BaseBdev3", 00:13:52.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.491 "is_configured": false, 00:13:52.491 "data_offset": 0, 00:13:52.491 "data_size": 0 00:13:52.491 } 00:13:52.491 ] 00:13:52.491 }' 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.491 18:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.752 [2024-11-28 18:54:22.341724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.752 BaseBdev2 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.752 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.013 [ 00:13:53.013 { 00:13:53.013 "name": "BaseBdev2", 00:13:53.013 "aliases": [ 00:13:53.013 "20064ac0-da3c-4b11-a19e-8887e1ab55de" 00:13:53.013 ], 00:13:53.013 "product_name": "Malloc disk", 00:13:53.013 "block_size": 512, 00:13:53.013 "num_blocks": 65536, 00:13:53.013 "uuid": "20064ac0-da3c-4b11-a19e-8887e1ab55de", 00:13:53.013 "assigned_rate_limits": { 00:13:53.013 "rw_ios_per_sec": 0, 00:13:53.013 "rw_mbytes_per_sec": 0, 00:13:53.013 "r_mbytes_per_sec": 0, 00:13:53.013 "w_mbytes_per_sec": 0 00:13:53.013 }, 00:13:53.013 "claimed": true, 00:13:53.013 "claim_type": "exclusive_write", 00:13:53.013 "zoned": false, 00:13:53.013 "supported_io_types": { 00:13:53.013 "read": true, 00:13:53.013 "write": true, 00:13:53.013 "unmap": true, 00:13:53.013 "flush": true, 00:13:53.013 "reset": true, 00:13:53.013 "nvme_admin": false, 00:13:53.013 "nvme_io": false, 00:13:53.013 "nvme_io_md": false, 00:13:53.013 "write_zeroes": true, 00:13:53.013 "zcopy": true, 00:13:53.013 "get_zone_info": false, 00:13:53.013 "zone_management": false, 00:13:53.013 "zone_append": false, 00:13:53.013 "compare": false, 00:13:53.013 "compare_and_write": false, 00:13:53.013 "abort": true, 00:13:53.013 "seek_hole": false, 00:13:53.013 "seek_data": false, 00:13:53.013 "copy": true, 00:13:53.013 "nvme_iov_md": false 00:13:53.013 }, 00:13:53.013 "memory_domains": [ 00:13:53.013 { 00:13:53.013 "dma_device_id": "system", 00:13:53.013 "dma_device_type": 1 00:13:53.013 }, 00:13:53.013 { 00:13:53.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.013 "dma_device_type": 2 00:13:53.013 } 00:13:53.013 ], 00:13:53.013 "driver_specific": {} 00:13:53.013 } 00:13:53.013 ] 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.013 "name": "Existed_Raid", 00:13:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.013 "strip_size_kb": 64, 00:13:53.013 "state": "configuring", 00:13:53.013 "raid_level": "raid5f", 00:13:53.013 "superblock": false, 00:13:53.013 "num_base_bdevs": 3, 00:13:53.013 "num_base_bdevs_discovered": 2, 00:13:53.013 "num_base_bdevs_operational": 3, 00:13:53.013 "base_bdevs_list": [ 00:13:53.013 { 00:13:53.013 "name": "BaseBdev1", 00:13:53.013 "uuid": "3de064e6-dc80-4982-b861-99a43008a5cc", 00:13:53.013 "is_configured": true, 00:13:53.013 "data_offset": 0, 00:13:53.013 "data_size": 65536 00:13:53.013 }, 00:13:53.013 { 00:13:53.013 "name": "BaseBdev2", 00:13:53.013 "uuid": "20064ac0-da3c-4b11-a19e-8887e1ab55de", 00:13:53.013 "is_configured": true, 00:13:53.013 "data_offset": 0, 00:13:53.013 "data_size": 65536 00:13:53.013 }, 00:13:53.013 { 00:13:53.013 "name": "BaseBdev3", 00:13:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.013 "is_configured": false, 00:13:53.013 "data_offset": 0, 00:13:53.013 "data_size": 0 00:13:53.013 } 00:13:53.013 ] 00:13:53.013 }' 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.013 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.274 [2024-11-28 18:54:22.841240] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.274 [2024-11-28 18:54:22.841316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:53.274 [2024-11-28 18:54:22.841327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:53.274 [2024-11-28 18:54:22.841733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:53.274 [2024-11-28 18:54:22.842248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:53.274 [2024-11-28 18:54:22.842274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:13:53.274 [2024-11-28 18:54:22.842516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.274 BaseBdev3 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.274 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.274 [ 00:13:53.274 { 00:13:53.274 "name": "BaseBdev3", 00:13:53.274 "aliases": [ 00:13:53.274 "eee725a4-9f72-428a-99de-05aee63edaf2" 00:13:53.274 ], 00:13:53.274 "product_name": "Malloc disk", 00:13:53.274 "block_size": 512, 00:13:53.274 "num_blocks": 65536, 00:13:53.274 "uuid": "eee725a4-9f72-428a-99de-05aee63edaf2", 00:13:53.274 "assigned_rate_limits": { 00:13:53.274 "rw_ios_per_sec": 0, 00:13:53.274 "rw_mbytes_per_sec": 0, 00:13:53.274 "r_mbytes_per_sec": 0, 00:13:53.274 "w_mbytes_per_sec": 0 00:13:53.274 }, 00:13:53.274 "claimed": true, 00:13:53.274 "claim_type": "exclusive_write", 00:13:53.274 "zoned": false, 00:13:53.274 "supported_io_types": { 00:13:53.274 "read": true, 00:13:53.274 "write": true, 00:13:53.274 "unmap": true, 00:13:53.274 "flush": true, 00:13:53.274 "reset": true, 00:13:53.274 "nvme_admin": false, 00:13:53.274 "nvme_io": false, 00:13:53.274 "nvme_io_md": false, 00:13:53.274 "write_zeroes": true, 00:13:53.274 "zcopy": true, 00:13:53.274 "get_zone_info": false, 00:13:53.274 "zone_management": false, 00:13:53.274 "zone_append": false, 00:13:53.274 "compare": false, 00:13:53.274 "compare_and_write": false, 00:13:53.274 "abort": true, 00:13:53.274 "seek_hole": false, 00:13:53.274 "seek_data": false, 00:13:53.274 "copy": true, 00:13:53.274 "nvme_iov_md": false 00:13:53.274 }, 00:13:53.274 "memory_domains": [ 00:13:53.274 { 00:13:53.533 "dma_device_id": "system", 00:13:53.533 "dma_device_type": 1 00:13:53.533 }, 00:13:53.533 { 00:13:53.533 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.533 "dma_device_type": 2 00:13:53.533 } 00:13:53.533 ], 00:13:53.533 "driver_specific": {} 00:13:53.533 } 00:13:53.533 ] 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.534 "name": "Existed_Raid", 00:13:53.534 "uuid": "5b30c5d5-b1a4-4ae1-8e08-e6125e9ab7d0", 00:13:53.534 "strip_size_kb": 64, 00:13:53.534 "state": "online", 00:13:53.534 "raid_level": "raid5f", 00:13:53.534 "superblock": false, 00:13:53.534 "num_base_bdevs": 3, 00:13:53.534 "num_base_bdevs_discovered": 3, 00:13:53.534 "num_base_bdevs_operational": 3, 00:13:53.534 "base_bdevs_list": [ 00:13:53.534 { 00:13:53.534 "name": "BaseBdev1", 00:13:53.534 "uuid": "3de064e6-dc80-4982-b861-99a43008a5cc", 00:13:53.534 "is_configured": true, 00:13:53.534 "data_offset": 0, 00:13:53.534 "data_size": 65536 00:13:53.534 }, 00:13:53.534 { 00:13:53.534 "name": "BaseBdev2", 00:13:53.534 "uuid": "20064ac0-da3c-4b11-a19e-8887e1ab55de", 00:13:53.534 "is_configured": true, 00:13:53.534 "data_offset": 0, 00:13:53.534 "data_size": 65536 00:13:53.534 }, 00:13:53.534 { 00:13:53.534 "name": "BaseBdev3", 00:13:53.534 "uuid": "eee725a4-9f72-428a-99de-05aee63edaf2", 00:13:53.534 "is_configured": true, 00:13:53.534 "data_offset": 0, 00:13:53.534 "data_size": 65536 00:13:53.534 } 00:13:53.534 ] 00:13:53.534 }' 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.534 18:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.794 [2024-11-28 18:54:23.353584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.794 "name": "Existed_Raid", 00:13:53.794 "aliases": [ 00:13:53.794 "5b30c5d5-b1a4-4ae1-8e08-e6125e9ab7d0" 00:13:53.794 ], 00:13:53.794 "product_name": "Raid Volume", 00:13:53.794 "block_size": 512, 00:13:53.794 "num_blocks": 131072, 00:13:53.794 "uuid": "5b30c5d5-b1a4-4ae1-8e08-e6125e9ab7d0", 00:13:53.794 "assigned_rate_limits": { 00:13:53.794 "rw_ios_per_sec": 0, 00:13:53.794 "rw_mbytes_per_sec": 0, 00:13:53.794 "r_mbytes_per_sec": 0, 00:13:53.794 "w_mbytes_per_sec": 0 00:13:53.794 }, 00:13:53.794 "claimed": false, 00:13:53.794 "zoned": false, 00:13:53.794 "supported_io_types": { 00:13:53.794 "read": true, 00:13:53.794 "write": true, 00:13:53.794 "unmap": false, 00:13:53.794 "flush": false, 00:13:53.794 "reset": true, 
00:13:53.794 "nvme_admin": false, 00:13:53.794 "nvme_io": false, 00:13:53.794 "nvme_io_md": false, 00:13:53.794 "write_zeroes": true, 00:13:53.794 "zcopy": false, 00:13:53.794 "get_zone_info": false, 00:13:53.794 "zone_management": false, 00:13:53.794 "zone_append": false, 00:13:53.794 "compare": false, 00:13:53.794 "compare_and_write": false, 00:13:53.794 "abort": false, 00:13:53.794 "seek_hole": false, 00:13:53.794 "seek_data": false, 00:13:53.794 "copy": false, 00:13:53.794 "nvme_iov_md": false 00:13:53.794 }, 00:13:53.794 "driver_specific": { 00:13:53.794 "raid": { 00:13:53.794 "uuid": "5b30c5d5-b1a4-4ae1-8e08-e6125e9ab7d0", 00:13:53.794 "strip_size_kb": 64, 00:13:53.794 "state": "online", 00:13:53.794 "raid_level": "raid5f", 00:13:53.794 "superblock": false, 00:13:53.794 "num_base_bdevs": 3, 00:13:53.794 "num_base_bdevs_discovered": 3, 00:13:53.794 "num_base_bdevs_operational": 3, 00:13:53.794 "base_bdevs_list": [ 00:13:53.794 { 00:13:53.794 "name": "BaseBdev1", 00:13:53.794 "uuid": "3de064e6-dc80-4982-b861-99a43008a5cc", 00:13:53.794 "is_configured": true, 00:13:53.794 "data_offset": 0, 00:13:53.794 "data_size": 65536 00:13:53.794 }, 00:13:53.794 { 00:13:53.794 "name": "BaseBdev2", 00:13:53.794 "uuid": "20064ac0-da3c-4b11-a19e-8887e1ab55de", 00:13:53.794 "is_configured": true, 00:13:53.794 "data_offset": 0, 00:13:53.794 "data_size": 65536 00:13:53.794 }, 00:13:53.794 { 00:13:53.794 "name": "BaseBdev3", 00:13:53.794 "uuid": "eee725a4-9f72-428a-99de-05aee63edaf2", 00:13:53.794 "is_configured": true, 00:13:53.794 "data_offset": 0, 00:13:53.794 "data_size": 65536 00:13:53.794 } 00:13:53.794 ] 00:13:53.794 } 00:13:53.794 } 00:13:53.794 }' 00:13:53.794 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:54.055 BaseBdev2 00:13:54.055 
BaseBdev3' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.055 18:54:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.055 [2024-11-28 18:54:23.621518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.055 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.315 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:54.315 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.315 "name": "Existed_Raid", 00:13:54.315 "uuid": "5b30c5d5-b1a4-4ae1-8e08-e6125e9ab7d0", 00:13:54.315 "strip_size_kb": 64, 00:13:54.315 "state": "online", 00:13:54.315 "raid_level": "raid5f", 00:13:54.315 "superblock": false, 00:13:54.315 "num_base_bdevs": 3, 00:13:54.315 "num_base_bdevs_discovered": 2, 00:13:54.315 "num_base_bdevs_operational": 2, 00:13:54.315 "base_bdevs_list": [ 00:13:54.315 { 00:13:54.315 "name": null, 00:13:54.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.315 "is_configured": false, 00:13:54.315 "data_offset": 0, 00:13:54.315 "data_size": 65536 00:13:54.315 }, 00:13:54.315 { 00:13:54.315 "name": "BaseBdev2", 00:13:54.315 "uuid": "20064ac0-da3c-4b11-a19e-8887e1ab55de", 00:13:54.315 "is_configured": true, 00:13:54.315 "data_offset": 0, 00:13:54.315 "data_size": 65536 00:13:54.315 }, 00:13:54.315 { 00:13:54.315 "name": "BaseBdev3", 00:13:54.315 "uuid": "eee725a4-9f72-428a-99de-05aee63edaf2", 00:13:54.315 "is_configured": true, 00:13:54.315 "data_offset": 0, 00:13:54.315 "data_size": 65536 00:13:54.315 } 00:13:54.315 ] 00:13:54.315 }' 00:13:54.315 18:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.315 18:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.575 18:54:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.575 [2024-11-28 18:54:24.148880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.575 [2024-11-28 18:54:24.148966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.575 [2024-11-28 18:54:24.160205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.575 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 [2024-11-28 18:54:24.220253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:54.835 [2024-11-28 18:54:24.220313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:54.835 18:54:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 BaseBdev2 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.835 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.835 [ 00:13:54.835 { 00:13:54.835 "name": "BaseBdev2", 00:13:54.835 "aliases": [ 00:13:54.836 "4390ea53-99ca-4e12-8cbe-dd71098b7340" 00:13:54.836 ], 00:13:54.836 "product_name": "Malloc disk", 00:13:54.836 "block_size": 512, 00:13:54.836 "num_blocks": 65536, 00:13:54.836 "uuid": "4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:54.836 "assigned_rate_limits": { 00:13:54.836 "rw_ios_per_sec": 0, 00:13:54.836 "rw_mbytes_per_sec": 0, 00:13:54.836 "r_mbytes_per_sec": 0, 00:13:54.836 "w_mbytes_per_sec": 0 00:13:54.836 }, 00:13:54.836 "claimed": false, 00:13:54.836 "zoned": false, 00:13:54.836 "supported_io_types": { 00:13:54.836 "read": true, 00:13:54.836 "write": true, 00:13:54.836 "unmap": true, 00:13:54.836 "flush": true, 00:13:54.836 "reset": true, 00:13:54.836 "nvme_admin": false, 00:13:54.836 "nvme_io": false, 00:13:54.836 "nvme_io_md": false, 00:13:54.836 "write_zeroes": true, 00:13:54.836 "zcopy": true, 00:13:54.836 "get_zone_info": false, 00:13:54.836 "zone_management": false, 00:13:54.836 "zone_append": false, 00:13:54.836 "compare": false, 00:13:54.836 "compare_and_write": false, 00:13:54.836 "abort": true, 00:13:54.836 "seek_hole": false, 00:13:54.836 "seek_data": false, 00:13:54.836 "copy": true, 00:13:54.836 "nvme_iov_md": false 00:13:54.836 }, 00:13:54.836 "memory_domains": [ 00:13:54.836 { 00:13:54.836 "dma_device_id": "system", 00:13:54.836 "dma_device_type": 1 00:13:54.836 }, 00:13:54.836 { 00:13:54.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.836 "dma_device_type": 2 00:13:54.836 } 00:13:54.836 ], 00:13:54.836 "driver_specific": {} 00:13:54.836 } 00:13:54.836 ] 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.836 BaseBdev3 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.836 
18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.836 [ 00:13:54.836 { 00:13:54.836 "name": "BaseBdev3", 00:13:54.836 "aliases": [ 00:13:54.836 "32dad6af-7dfa-4492-a9b2-3e57fae202dc" 00:13:54.836 ], 00:13:54.836 "product_name": "Malloc disk", 00:13:54.836 "block_size": 512, 00:13:54.836 "num_blocks": 65536, 00:13:54.836 "uuid": "32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:54.836 "assigned_rate_limits": { 00:13:54.836 "rw_ios_per_sec": 0, 00:13:54.836 "rw_mbytes_per_sec": 0, 00:13:54.836 "r_mbytes_per_sec": 0, 00:13:54.836 "w_mbytes_per_sec": 0 00:13:54.836 }, 00:13:54.836 "claimed": false, 00:13:54.836 "zoned": false, 00:13:54.836 "supported_io_types": { 00:13:54.836 "read": true, 00:13:54.836 "write": true, 00:13:54.836 "unmap": true, 00:13:54.836 "flush": true, 00:13:54.836 "reset": true, 00:13:54.836 "nvme_admin": false, 00:13:54.836 "nvme_io": false, 00:13:54.836 "nvme_io_md": false, 00:13:54.836 "write_zeroes": true, 00:13:54.836 "zcopy": true, 00:13:54.836 "get_zone_info": false, 00:13:54.836 "zone_management": false, 00:13:54.836 "zone_append": false, 00:13:54.836 "compare": false, 00:13:54.836 "compare_and_write": false, 00:13:54.836 "abort": true, 00:13:54.836 "seek_hole": false, 00:13:54.836 "seek_data": false, 00:13:54.836 "copy": true, 00:13:54.836 "nvme_iov_md": false 00:13:54.836 }, 00:13:54.836 "memory_domains": [ 00:13:54.836 { 00:13:54.836 "dma_device_id": "system", 00:13:54.836 "dma_device_type": 1 00:13:54.836 }, 00:13:54.836 { 00:13:54.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.836 "dma_device_type": 2 00:13:54.836 } 00:13:54.836 ], 00:13:54.836 "driver_specific": {} 00:13:54.836 } 00:13:54.836 ] 00:13:54.836 18:54:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.836 [2024-11-28 18:54:24.391151] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.836 [2024-11-28 18:54:24.391299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.836 [2024-11-28 18:54:24.391338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.836 [2024-11-28 18:54:24.393169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.836 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.096 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.096 "name": "Existed_Raid", 00:13:55.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.096 "strip_size_kb": 64, 00:13:55.096 "state": "configuring", 00:13:55.096 "raid_level": "raid5f", 00:13:55.096 "superblock": false, 00:13:55.096 "num_base_bdevs": 3, 00:13:55.096 "num_base_bdevs_discovered": 2, 00:13:55.096 "num_base_bdevs_operational": 3, 00:13:55.096 "base_bdevs_list": [ 00:13:55.096 { 00:13:55.096 "name": "BaseBdev1", 00:13:55.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.096 "is_configured": false, 00:13:55.096 "data_offset": 0, 00:13:55.096 "data_size": 0 00:13:55.096 }, 00:13:55.096 { 00:13:55.096 "name": "BaseBdev2", 00:13:55.096 "uuid": 
"4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:55.096 "is_configured": true, 00:13:55.096 "data_offset": 0, 00:13:55.096 "data_size": 65536 00:13:55.096 }, 00:13:55.096 { 00:13:55.096 "name": "BaseBdev3", 00:13:55.096 "uuid": "32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:55.096 "is_configured": true, 00:13:55.096 "data_offset": 0, 00:13:55.096 "data_size": 65536 00:13:55.096 } 00:13:55.096 ] 00:13:55.096 }' 00:13:55.096 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.096 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.356 [2024-11-28 18:54:24.843236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.356 "name": "Existed_Raid", 00:13:55.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.356 "strip_size_kb": 64, 00:13:55.356 "state": "configuring", 00:13:55.356 "raid_level": "raid5f", 00:13:55.356 "superblock": false, 00:13:55.356 "num_base_bdevs": 3, 00:13:55.356 "num_base_bdevs_discovered": 1, 00:13:55.356 "num_base_bdevs_operational": 3, 00:13:55.356 "base_bdevs_list": [ 00:13:55.356 { 00:13:55.356 "name": "BaseBdev1", 00:13:55.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.356 "is_configured": false, 00:13:55.356 "data_offset": 0, 00:13:55.356 "data_size": 0 00:13:55.356 }, 00:13:55.356 { 00:13:55.356 "name": null, 00:13:55.356 "uuid": "4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:55.356 "is_configured": false, 00:13:55.356 "data_offset": 0, 00:13:55.356 "data_size": 65536 00:13:55.356 }, 00:13:55.356 { 00:13:55.356 "name": "BaseBdev3", 00:13:55.356 "uuid": 
"32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:55.356 "is_configured": true, 00:13:55.356 "data_offset": 0, 00:13:55.356 "data_size": 65536 00:13:55.356 } 00:13:55.356 ] 00:13:55.356 }' 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.356 18:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.925 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.925 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:55.925 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.925 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.925 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.925 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:55.925 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:55.925 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.926 [2024-11-28 18:54:25.306419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.926 BaseBdev1 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.926 [ 00:13:55.926 { 00:13:55.926 "name": "BaseBdev1", 00:13:55.926 "aliases": [ 00:13:55.926 "d88c2de0-df97-4365-8d3e-b652cc1a23d2" 00:13:55.926 ], 00:13:55.926 "product_name": "Malloc disk", 00:13:55.926 "block_size": 512, 00:13:55.926 "num_blocks": 65536, 00:13:55.926 "uuid": "d88c2de0-df97-4365-8d3e-b652cc1a23d2", 00:13:55.926 "assigned_rate_limits": { 00:13:55.926 "rw_ios_per_sec": 0, 00:13:55.926 "rw_mbytes_per_sec": 0, 00:13:55.926 "r_mbytes_per_sec": 0, 00:13:55.926 "w_mbytes_per_sec": 0 00:13:55.926 }, 00:13:55.926 "claimed": true, 00:13:55.926 "claim_type": "exclusive_write", 00:13:55.926 "zoned": false, 00:13:55.926 "supported_io_types": { 00:13:55.926 "read": true, 00:13:55.926 "write": true, 00:13:55.926 "unmap": true, 00:13:55.926 "flush": true, 00:13:55.926 "reset": true, 
00:13:55.926 "nvme_admin": false, 00:13:55.926 "nvme_io": false, 00:13:55.926 "nvme_io_md": false, 00:13:55.926 "write_zeroes": true, 00:13:55.926 "zcopy": true, 00:13:55.926 "get_zone_info": false, 00:13:55.926 "zone_management": false, 00:13:55.926 "zone_append": false, 00:13:55.926 "compare": false, 00:13:55.926 "compare_and_write": false, 00:13:55.926 "abort": true, 00:13:55.926 "seek_hole": false, 00:13:55.926 "seek_data": false, 00:13:55.926 "copy": true, 00:13:55.926 "nvme_iov_md": false 00:13:55.926 }, 00:13:55.926 "memory_domains": [ 00:13:55.926 { 00:13:55.926 "dma_device_id": "system", 00:13:55.926 "dma_device_type": 1 00:13:55.926 }, 00:13:55.926 { 00:13:55.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.926 "dma_device_type": 2 00:13:55.926 } 00:13:55.926 ], 00:13:55.926 "driver_specific": {} 00:13:55.926 } 00:13:55.926 ] 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.926 "name": "Existed_Raid", 00:13:55.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.926 "strip_size_kb": 64, 00:13:55.926 "state": "configuring", 00:13:55.926 "raid_level": "raid5f", 00:13:55.926 "superblock": false, 00:13:55.926 "num_base_bdevs": 3, 00:13:55.926 "num_base_bdevs_discovered": 2, 00:13:55.926 "num_base_bdevs_operational": 3, 00:13:55.926 "base_bdevs_list": [ 00:13:55.926 { 00:13:55.926 "name": "BaseBdev1", 00:13:55.926 "uuid": "d88c2de0-df97-4365-8d3e-b652cc1a23d2", 00:13:55.926 "is_configured": true, 00:13:55.926 "data_offset": 0, 00:13:55.926 "data_size": 65536 00:13:55.926 }, 00:13:55.926 { 00:13:55.926 "name": null, 00:13:55.926 "uuid": "4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:55.926 "is_configured": false, 00:13:55.926 "data_offset": 0, 00:13:55.926 "data_size": 65536 00:13:55.926 }, 00:13:55.926 { 00:13:55.926 "name": "BaseBdev3", 00:13:55.926 "uuid": "32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:55.926 "is_configured": true, 00:13:55.926 "data_offset": 0, 00:13:55.926 
"data_size": 65536 00:13:55.926 } 00:13:55.926 ] 00:13:55.926 }' 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.926 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.186 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.186 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.186 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.186 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:56.445 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.445 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:56.445 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:56.445 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.445 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.445 [2024-11-28 18:54:25.846638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:56.445 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.446 "name": "Existed_Raid", 00:13:56.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.446 "strip_size_kb": 64, 00:13:56.446 "state": "configuring", 00:13:56.446 "raid_level": "raid5f", 00:13:56.446 "superblock": false, 00:13:56.446 "num_base_bdevs": 3, 00:13:56.446 "num_base_bdevs_discovered": 1, 00:13:56.446 "num_base_bdevs_operational": 3, 00:13:56.446 "base_bdevs_list": [ 00:13:56.446 { 00:13:56.446 "name": "BaseBdev1", 00:13:56.446 "uuid": "d88c2de0-df97-4365-8d3e-b652cc1a23d2", 00:13:56.446 "is_configured": true, 00:13:56.446 "data_offset": 0, 
00:13:56.446 "data_size": 65536 00:13:56.446 }, 00:13:56.446 { 00:13:56.446 "name": null, 00:13:56.446 "uuid": "4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:56.446 "is_configured": false, 00:13:56.446 "data_offset": 0, 00:13:56.446 "data_size": 65536 00:13:56.446 }, 00:13:56.446 { 00:13:56.446 "name": null, 00:13:56.446 "uuid": "32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:56.446 "is_configured": false, 00:13:56.446 "data_offset": 0, 00:13:56.446 "data_size": 65536 00:13:56.446 } 00:13:56.446 ] 00:13:56.446 }' 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.446 18:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.016 [2024-11-28 18:54:26.410815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.016 "name": "Existed_Raid", 00:13:57.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.016 
"strip_size_kb": 64, 00:13:57.016 "state": "configuring", 00:13:57.016 "raid_level": "raid5f", 00:13:57.016 "superblock": false, 00:13:57.016 "num_base_bdevs": 3, 00:13:57.016 "num_base_bdevs_discovered": 2, 00:13:57.016 "num_base_bdevs_operational": 3, 00:13:57.016 "base_bdevs_list": [ 00:13:57.016 { 00:13:57.016 "name": "BaseBdev1", 00:13:57.016 "uuid": "d88c2de0-df97-4365-8d3e-b652cc1a23d2", 00:13:57.016 "is_configured": true, 00:13:57.016 "data_offset": 0, 00:13:57.016 "data_size": 65536 00:13:57.016 }, 00:13:57.016 { 00:13:57.016 "name": null, 00:13:57.016 "uuid": "4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:57.016 "is_configured": false, 00:13:57.016 "data_offset": 0, 00:13:57.016 "data_size": 65536 00:13:57.016 }, 00:13:57.016 { 00:13:57.016 "name": "BaseBdev3", 00:13:57.016 "uuid": "32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:57.016 "is_configured": true, 00:13:57.016 "data_offset": 0, 00:13:57.016 "data_size": 65536 00:13:57.016 } 00:13:57.016 ] 00:13:57.016 }' 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.016 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.586 [2024-11-28 18:54:26.942960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.586 18:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.586 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.586 "name": "Existed_Raid", 00:13:57.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.586 "strip_size_kb": 64, 00:13:57.586 "state": "configuring", 00:13:57.586 "raid_level": "raid5f", 00:13:57.586 "superblock": false, 00:13:57.586 "num_base_bdevs": 3, 00:13:57.586 "num_base_bdevs_discovered": 1, 00:13:57.586 "num_base_bdevs_operational": 3, 00:13:57.586 "base_bdevs_list": [ 00:13:57.586 { 00:13:57.586 "name": null, 00:13:57.587 "uuid": "d88c2de0-df97-4365-8d3e-b652cc1a23d2", 00:13:57.587 "is_configured": false, 00:13:57.587 "data_offset": 0, 00:13:57.587 "data_size": 65536 00:13:57.587 }, 00:13:57.587 { 00:13:57.587 "name": null, 00:13:57.587 "uuid": "4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:57.587 "is_configured": false, 00:13:57.587 "data_offset": 0, 00:13:57.587 "data_size": 65536 00:13:57.587 }, 00:13:57.587 { 00:13:57.587 "name": "BaseBdev3", 00:13:57.587 "uuid": "32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:57.587 "is_configured": true, 00:13:57.587 "data_offset": 0, 00:13:57.587 "data_size": 65536 00:13:57.587 } 00:13:57.587 ] 00:13:57.587 }' 00:13:57.587 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.587 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.846 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.846 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.846 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:57.846 
18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.846 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.106 [2024-11-28 18:54:27.457662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.106 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.107 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.107 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.107 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.107 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.107 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.107 "name": "Existed_Raid", 00:13:58.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.107 "strip_size_kb": 64, 00:13:58.107 "state": "configuring", 00:13:58.107 "raid_level": "raid5f", 00:13:58.107 "superblock": false, 00:13:58.107 "num_base_bdevs": 3, 00:13:58.107 "num_base_bdevs_discovered": 2, 00:13:58.107 "num_base_bdevs_operational": 3, 00:13:58.107 "base_bdevs_list": [ 00:13:58.107 { 00:13:58.107 "name": null, 00:13:58.107 "uuid": "d88c2de0-df97-4365-8d3e-b652cc1a23d2", 00:13:58.107 "is_configured": false, 00:13:58.107 "data_offset": 0, 00:13:58.107 "data_size": 65536 00:13:58.107 }, 00:13:58.107 { 00:13:58.107 "name": "BaseBdev2", 00:13:58.107 "uuid": "4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:58.107 "is_configured": true, 00:13:58.107 "data_offset": 0, 00:13:58.107 "data_size": 65536 00:13:58.107 }, 00:13:58.107 { 00:13:58.107 "name": "BaseBdev3", 00:13:58.107 "uuid": "32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:58.107 "is_configured": true, 00:13:58.107 "data_offset": 0, 00:13:58.107 "data_size": 65536 00:13:58.107 } 00:13:58.107 ] 00:13:58.107 }' 00:13:58.107 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:58.107 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:58.367 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.627 18:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d88c2de0-df97-4365-8d3e-b652cc1a23d2 00:13:58.627 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.627 18:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.627 [2024-11-28 18:54:28.012406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:58.627 [2024-11-28 18:54:28.012532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:13:58.627 [2024-11-28 18:54:28.012558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:58.627 [2024-11-28 18:54:28.012865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:58.627 [2024-11-28 18:54:28.013317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:58.627 [2024-11-28 18:54:28.013371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:58.627 [2024-11-28 18:54:28.013591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.627 NewBaseBdev 00:13:58.627 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.627 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:58.627 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:58.627 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.627 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:58.627 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.628 [ 00:13:58.628 { 00:13:58.628 "name": "NewBaseBdev", 00:13:58.628 "aliases": [ 00:13:58.628 "d88c2de0-df97-4365-8d3e-b652cc1a23d2" 00:13:58.628 ], 00:13:58.628 "product_name": "Malloc disk", 00:13:58.628 "block_size": 512, 00:13:58.628 "num_blocks": 65536, 00:13:58.628 "uuid": "d88c2de0-df97-4365-8d3e-b652cc1a23d2", 00:13:58.628 "assigned_rate_limits": { 00:13:58.628 "rw_ios_per_sec": 0, 00:13:58.628 "rw_mbytes_per_sec": 0, 00:13:58.628 "r_mbytes_per_sec": 0, 00:13:58.628 "w_mbytes_per_sec": 0 00:13:58.628 }, 00:13:58.628 "claimed": true, 00:13:58.628 "claim_type": "exclusive_write", 00:13:58.628 "zoned": false, 00:13:58.628 "supported_io_types": { 00:13:58.628 "read": true, 00:13:58.628 "write": true, 00:13:58.628 "unmap": true, 00:13:58.628 "flush": true, 00:13:58.628 "reset": true, 00:13:58.628 "nvme_admin": false, 00:13:58.628 "nvme_io": false, 00:13:58.628 "nvme_io_md": false, 00:13:58.628 "write_zeroes": true, 00:13:58.628 "zcopy": true, 00:13:58.628 "get_zone_info": false, 00:13:58.628 "zone_management": false, 00:13:58.628 "zone_append": false, 00:13:58.628 "compare": false, 00:13:58.628 "compare_and_write": false, 00:13:58.628 "abort": true, 00:13:58.628 "seek_hole": false, 00:13:58.628 "seek_data": false, 00:13:58.628 "copy": true, 00:13:58.628 "nvme_iov_md": false 00:13:58.628 }, 00:13:58.628 "memory_domains": [ 00:13:58.628 { 00:13:58.628 "dma_device_id": "system", 00:13:58.628 "dma_device_type": 1 00:13:58.628 }, 00:13:58.628 { 00:13:58.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.628 "dma_device_type": 2 00:13:58.628 } 00:13:58.628 ], 00:13:58.628 "driver_specific": {} 00:13:58.628 } 00:13:58.628 ] 00:13:58.628 18:54:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.628 18:54:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.628 "name": "Existed_Raid", 00:13:58.628 "uuid": "6b8585c2-b7e1-4634-a38c-8f3ebb8136bc", 00:13:58.628 "strip_size_kb": 64, 00:13:58.628 "state": "online", 00:13:58.628 "raid_level": "raid5f", 00:13:58.628 "superblock": false, 00:13:58.628 "num_base_bdevs": 3, 00:13:58.628 "num_base_bdevs_discovered": 3, 00:13:58.628 "num_base_bdevs_operational": 3, 00:13:58.628 "base_bdevs_list": [ 00:13:58.628 { 00:13:58.628 "name": "NewBaseBdev", 00:13:58.628 "uuid": "d88c2de0-df97-4365-8d3e-b652cc1a23d2", 00:13:58.628 "is_configured": true, 00:13:58.628 "data_offset": 0, 00:13:58.628 "data_size": 65536 00:13:58.628 }, 00:13:58.628 { 00:13:58.628 "name": "BaseBdev2", 00:13:58.628 "uuid": "4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:58.628 "is_configured": true, 00:13:58.628 "data_offset": 0, 00:13:58.628 "data_size": 65536 00:13:58.628 }, 00:13:58.628 { 00:13:58.628 "name": "BaseBdev3", 00:13:58.628 "uuid": "32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:58.628 "is_configured": true, 00:13:58.628 "data_offset": 0, 00:13:58.628 "data_size": 65536 00:13:58.628 } 00:13:58.628 ] 00:13:58.628 }' 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.628 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.888 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:58.888 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:58.888 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:58.888 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:58.888 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:58.888 18:54:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:58.888 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:58.888 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:58.888 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.888 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.888 [2024-11-28 18:54:28.480758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.148 "name": "Existed_Raid", 00:13:59.148 "aliases": [ 00:13:59.148 "6b8585c2-b7e1-4634-a38c-8f3ebb8136bc" 00:13:59.148 ], 00:13:59.148 "product_name": "Raid Volume", 00:13:59.148 "block_size": 512, 00:13:59.148 "num_blocks": 131072, 00:13:59.148 "uuid": "6b8585c2-b7e1-4634-a38c-8f3ebb8136bc", 00:13:59.148 "assigned_rate_limits": { 00:13:59.148 "rw_ios_per_sec": 0, 00:13:59.148 "rw_mbytes_per_sec": 0, 00:13:59.148 "r_mbytes_per_sec": 0, 00:13:59.148 "w_mbytes_per_sec": 0 00:13:59.148 }, 00:13:59.148 "claimed": false, 00:13:59.148 "zoned": false, 00:13:59.148 "supported_io_types": { 00:13:59.148 "read": true, 00:13:59.148 "write": true, 00:13:59.148 "unmap": false, 00:13:59.148 "flush": false, 00:13:59.148 "reset": true, 00:13:59.148 "nvme_admin": false, 00:13:59.148 "nvme_io": false, 00:13:59.148 "nvme_io_md": false, 00:13:59.148 "write_zeroes": true, 00:13:59.148 "zcopy": false, 00:13:59.148 "get_zone_info": false, 00:13:59.148 "zone_management": false, 00:13:59.148 "zone_append": false, 00:13:59.148 "compare": false, 00:13:59.148 "compare_and_write": false, 00:13:59.148 "abort": false, 
00:13:59.148 "seek_hole": false, 00:13:59.148 "seek_data": false, 00:13:59.148 "copy": false, 00:13:59.148 "nvme_iov_md": false 00:13:59.148 }, 00:13:59.148 "driver_specific": { 00:13:59.148 "raid": { 00:13:59.148 "uuid": "6b8585c2-b7e1-4634-a38c-8f3ebb8136bc", 00:13:59.148 "strip_size_kb": 64, 00:13:59.148 "state": "online", 00:13:59.148 "raid_level": "raid5f", 00:13:59.148 "superblock": false, 00:13:59.148 "num_base_bdevs": 3, 00:13:59.148 "num_base_bdevs_discovered": 3, 00:13:59.148 "num_base_bdevs_operational": 3, 00:13:59.148 "base_bdevs_list": [ 00:13:59.148 { 00:13:59.148 "name": "NewBaseBdev", 00:13:59.148 "uuid": "d88c2de0-df97-4365-8d3e-b652cc1a23d2", 00:13:59.148 "is_configured": true, 00:13:59.148 "data_offset": 0, 00:13:59.148 "data_size": 65536 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "name": "BaseBdev2", 00:13:59.148 "uuid": "4390ea53-99ca-4e12-8cbe-dd71098b7340", 00:13:59.148 "is_configured": true, 00:13:59.148 "data_offset": 0, 00:13:59.148 "data_size": 65536 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "name": "BaseBdev3", 00:13:59.148 "uuid": "32dad6af-7dfa-4492-a9b2-3e57fae202dc", 00:13:59.148 "is_configured": true, 00:13:59.148 "data_offset": 0, 00:13:59.148 "data_size": 65536 00:13:59.148 } 00:13:59.148 ] 00:13:59.148 } 00:13:59.148 } 00:13:59.148 }' 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:59.148 BaseBdev2 00:13:59.148 BaseBdev3' 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.148 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.149 18:54:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 [2024-11-28 18:54:28.732642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.149 [2024-11-28 18:54:28.732717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.149 [2024-11-28 18:54:28.732795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.149 [2024-11-28 18:54:28.733030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.149 [2024-11-28 18:54:28.733040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.149 18:54:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 91933 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 91933 ']' 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 91933 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.149 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91933 00:13:59.409 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.409 killing process with pid 91933 00:13:59.409 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.409 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91933' 00:13:59.409 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 91933 00:13:59.409 [2024-11-28 18:54:28.781701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.409 18:54:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 91933 00:13:59.409 [2024-11-28 18:54:28.812032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:59.670 00:13:59.670 real 0m9.088s 00:13:59.670 user 0m15.487s 00:13:59.670 sys 0m1.933s 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.670 ************************************ 00:13:59.670 END TEST raid5f_state_function_test 00:13:59.670 ************************************ 00:13:59.670 18:54:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.670 18:54:29 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:59.670 18:54:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:59.670 18:54:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.670 18:54:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.670 ************************************ 00:13:59.670 START TEST raid5f_state_function_test_sb 00:13:59.670 ************************************ 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.670 18:54:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:59.670 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=92538 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92538' 00:13:59.671 Process raid pid: 92538 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 92538 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 92538 ']' 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.671 18:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.671 [2024-11-28 18:54:29.213961] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:59.671 [2024-11-28 18:54:29.214123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.931 [2024-11-28 18:54:29.349376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:59.931 [2024-11-28 18:54:29.383114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.931 [2024-11-28 18:54:29.410487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.931 [2024-11-28 18:54:29.454439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.931 [2024-11-28 18:54:29.454541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.501 [2024-11-28 18:54:30.030753] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.501 [2024-11-28 18:54:30.030883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.501 [2024-11-28 18:54:30.030916] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.501 [2024-11-28 18:54:30.030936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.501 [2024-11-28 18:54:30.030961] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:00.501 [2024-11-28 18:54:30.030979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.501 18:54:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.501 "name": "Existed_Raid", 00:14:00.501 "uuid": "ad91dc28-28aa-4d48-877a-85602f4c690a", 
00:14:00.501 "strip_size_kb": 64, 00:14:00.501 "state": "configuring", 00:14:00.501 "raid_level": "raid5f", 00:14:00.501 "superblock": true, 00:14:00.501 "num_base_bdevs": 3, 00:14:00.501 "num_base_bdevs_discovered": 0, 00:14:00.501 "num_base_bdevs_operational": 3, 00:14:00.501 "base_bdevs_list": [ 00:14:00.501 { 00:14:00.501 "name": "BaseBdev1", 00:14:00.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.501 "is_configured": false, 00:14:00.501 "data_offset": 0, 00:14:00.501 "data_size": 0 00:14:00.501 }, 00:14:00.501 { 00:14:00.501 "name": "BaseBdev2", 00:14:00.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.501 "is_configured": false, 00:14:00.501 "data_offset": 0, 00:14:00.501 "data_size": 0 00:14:00.501 }, 00:14:00.501 { 00:14:00.501 "name": "BaseBdev3", 00:14:00.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.501 "is_configured": false, 00:14:00.501 "data_offset": 0, 00:14:00.501 "data_size": 0 00:14:00.501 } 00:14:00.501 ] 00:14:00.501 }' 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.501 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.071 [2024-11-28 18:54:30.506755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.071 [2024-11-28 18:54:30.506840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.071 18:54:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.071 [2024-11-28 18:54:30.518809] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.071 [2024-11-28 18:54:30.518850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.071 [2024-11-28 18:54:30.518861] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.071 [2024-11-28 18:54:30.518869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.071 [2024-11-28 18:54:30.518877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.071 [2024-11-28 18:54:30.518886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.071 [2024-11-28 18:54:30.539826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.071 BaseBdev1 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.071 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.072 [ 00:14:01.072 { 00:14:01.072 "name": "BaseBdev1", 00:14:01.072 "aliases": [ 00:14:01.072 "7d10a99e-be8a-48af-a59a-5c834c51fa02" 00:14:01.072 ], 00:14:01.072 "product_name": "Malloc disk", 00:14:01.072 "block_size": 512, 00:14:01.072 "num_blocks": 65536, 00:14:01.072 "uuid": "7d10a99e-be8a-48af-a59a-5c834c51fa02", 00:14:01.072 "assigned_rate_limits": { 00:14:01.072 "rw_ios_per_sec": 0, 00:14:01.072 "rw_mbytes_per_sec": 0, 00:14:01.072 "r_mbytes_per_sec": 0, 00:14:01.072 "w_mbytes_per_sec": 0 00:14:01.072 }, 
00:14:01.072 "claimed": true, 00:14:01.072 "claim_type": "exclusive_write", 00:14:01.072 "zoned": false, 00:14:01.072 "supported_io_types": { 00:14:01.072 "read": true, 00:14:01.072 "write": true, 00:14:01.072 "unmap": true, 00:14:01.072 "flush": true, 00:14:01.072 "reset": true, 00:14:01.072 "nvme_admin": false, 00:14:01.072 "nvme_io": false, 00:14:01.072 "nvme_io_md": false, 00:14:01.072 "write_zeroes": true, 00:14:01.072 "zcopy": true, 00:14:01.072 "get_zone_info": false, 00:14:01.072 "zone_management": false, 00:14:01.072 "zone_append": false, 00:14:01.072 "compare": false, 00:14:01.072 "compare_and_write": false, 00:14:01.072 "abort": true, 00:14:01.072 "seek_hole": false, 00:14:01.072 "seek_data": false, 00:14:01.072 "copy": true, 00:14:01.072 "nvme_iov_md": false 00:14:01.072 }, 00:14:01.072 "memory_domains": [ 00:14:01.072 { 00:14:01.072 "dma_device_id": "system", 00:14:01.072 "dma_device_type": 1 00:14:01.072 }, 00:14:01.072 { 00:14:01.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.072 "dma_device_type": 2 00:14:01.072 } 00:14:01.072 ], 00:14:01.072 "driver_specific": {} 00:14:01.072 } 00:14:01.072 ] 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.072 "name": "Existed_Raid", 00:14:01.072 "uuid": "19fdb70a-a18e-4e90-b0cd-59c16c11ce70", 00:14:01.072 "strip_size_kb": 64, 00:14:01.072 "state": "configuring", 00:14:01.072 "raid_level": "raid5f", 00:14:01.072 "superblock": true, 00:14:01.072 "num_base_bdevs": 3, 00:14:01.072 "num_base_bdevs_discovered": 1, 00:14:01.072 "num_base_bdevs_operational": 3, 00:14:01.072 "base_bdevs_list": [ 00:14:01.072 { 00:14:01.072 "name": "BaseBdev1", 00:14:01.072 "uuid": "7d10a99e-be8a-48af-a59a-5c834c51fa02", 00:14:01.072 "is_configured": true, 00:14:01.072 "data_offset": 2048, 00:14:01.072 "data_size": 63488 00:14:01.072 }, 00:14:01.072 { 00:14:01.072 "name": "BaseBdev2", 00:14:01.072 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:01.072 "is_configured": false, 00:14:01.072 "data_offset": 0, 00:14:01.072 "data_size": 0 00:14:01.072 }, 00:14:01.072 { 00:14:01.072 "name": "BaseBdev3", 00:14:01.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.072 "is_configured": false, 00:14:01.072 "data_offset": 0, 00:14:01.072 "data_size": 0 00:14:01.072 } 00:14:01.072 ] 00:14:01.072 }' 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.072 18:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.642 [2024-11-28 18:54:31.020038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.642 [2024-11-28 18:54:31.020084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.642 [2024-11-28 18:54:31.032066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.642 [2024-11-28 18:54:31.033962] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:01.642 [2024-11-28 18:54:31.034048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.642 [2024-11-28 18:54:31.034080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.642 [2024-11-28 18:54:31.034101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.642 18:54:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.642 "name": "Existed_Raid", 00:14:01.642 "uuid": "94e375e6-88f5-46b4-8bc2-0e0d6e6ebb6f", 00:14:01.642 "strip_size_kb": 64, 00:14:01.642 "state": "configuring", 00:14:01.642 "raid_level": "raid5f", 00:14:01.642 "superblock": true, 00:14:01.642 "num_base_bdevs": 3, 00:14:01.642 "num_base_bdevs_discovered": 1, 00:14:01.642 "num_base_bdevs_operational": 3, 00:14:01.642 "base_bdevs_list": [ 00:14:01.642 { 00:14:01.642 "name": "BaseBdev1", 00:14:01.642 "uuid": "7d10a99e-be8a-48af-a59a-5c834c51fa02", 00:14:01.642 "is_configured": true, 00:14:01.642 "data_offset": 2048, 00:14:01.642 "data_size": 63488 00:14:01.642 }, 00:14:01.642 { 00:14:01.642 "name": "BaseBdev2", 00:14:01.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.642 "is_configured": false, 00:14:01.642 "data_offset": 0, 00:14:01.642 "data_size": 0 00:14:01.642 }, 00:14:01.642 { 00:14:01.642 "name": "BaseBdev3", 00:14:01.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.642 "is_configured": false, 00:14:01.642 "data_offset": 0, 00:14:01.642 "data_size": 0 00:14:01.642 } 00:14:01.642 ] 00:14:01.642 }' 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.642 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.902 [2024-11-28 18:54:31.463292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.902 BaseBdev2 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.902 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.902 [ 00:14:01.902 { 00:14:01.902 "name": "BaseBdev2", 00:14:01.902 "aliases": [ 00:14:01.902 "548aa338-2fc3-41c3-8eec-e9e633518109" 00:14:01.902 ], 00:14:01.902 "product_name": "Malloc disk", 00:14:01.902 "block_size": 512, 00:14:01.902 "num_blocks": 65536, 00:14:01.902 "uuid": "548aa338-2fc3-41c3-8eec-e9e633518109", 00:14:01.902 "assigned_rate_limits": { 00:14:01.902 "rw_ios_per_sec": 0, 00:14:01.902 "rw_mbytes_per_sec": 0, 00:14:01.902 "r_mbytes_per_sec": 0, 00:14:01.902 "w_mbytes_per_sec": 0 00:14:01.902 }, 00:14:01.902 "claimed": true, 00:14:01.902 "claim_type": "exclusive_write", 00:14:01.902 "zoned": false, 00:14:01.902 "supported_io_types": { 00:14:01.902 "read": true, 00:14:01.902 "write": true, 00:14:01.902 "unmap": true, 00:14:01.902 "flush": true, 00:14:01.902 "reset": true, 00:14:01.902 "nvme_admin": false, 00:14:01.902 "nvme_io": false, 00:14:01.902 "nvme_io_md": false, 00:14:01.902 "write_zeroes": true, 00:14:01.902 "zcopy": true, 00:14:01.902 "get_zone_info": false, 00:14:01.903 "zone_management": false, 00:14:01.903 "zone_append": false, 00:14:01.903 "compare": false, 00:14:01.903 "compare_and_write": false, 00:14:01.903 "abort": true, 00:14:01.903 "seek_hole": false, 00:14:01.903 "seek_data": false, 00:14:01.903 "copy": true, 00:14:01.903 "nvme_iov_md": false 00:14:01.903 }, 00:14:01.903 "memory_domains": [ 00:14:01.903 { 00:14:01.903 "dma_device_id": "system", 00:14:01.903 "dma_device_type": 1 00:14:01.903 }, 00:14:01.903 { 00:14:01.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.903 "dma_device_type": 2 00:14:01.903 } 00:14:01.903 ], 00:14:01.903 "driver_specific": {} 00:14:01.903 } 00:14:01.903 ] 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.903 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.163 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.163 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.163 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.163 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.163 18:54:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.163 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.163 "name": "Existed_Raid", 00:14:02.163 "uuid": "94e375e6-88f5-46b4-8bc2-0e0d6e6ebb6f", 00:14:02.163 "strip_size_kb": 64, 00:14:02.163 "state": "configuring", 00:14:02.163 "raid_level": "raid5f", 00:14:02.163 "superblock": true, 00:14:02.163 "num_base_bdevs": 3, 00:14:02.163 "num_base_bdevs_discovered": 2, 00:14:02.163 "num_base_bdevs_operational": 3, 00:14:02.163 "base_bdevs_list": [ 00:14:02.163 { 00:14:02.163 "name": "BaseBdev1", 00:14:02.163 "uuid": "7d10a99e-be8a-48af-a59a-5c834c51fa02", 00:14:02.163 "is_configured": true, 00:14:02.163 "data_offset": 2048, 00:14:02.163 "data_size": 63488 00:14:02.163 }, 00:14:02.163 { 00:14:02.163 "name": "BaseBdev2", 00:14:02.163 "uuid": "548aa338-2fc3-41c3-8eec-e9e633518109", 00:14:02.163 "is_configured": true, 00:14:02.163 "data_offset": 2048, 00:14:02.163 "data_size": 63488 00:14:02.163 }, 00:14:02.163 { 00:14:02.163 "name": "BaseBdev3", 00:14:02.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.163 "is_configured": false, 00:14:02.163 "data_offset": 0, 00:14:02.163 "data_size": 0 00:14:02.163 } 00:14:02.163 ] 00:14:02.163 }' 00:14:02.163 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.163 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.423 [2024-11-28 18:54:31.990880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:02.423 [2024-11-28 18:54:31.991531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:02.423 [2024-11-28 18:54:31.991586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:02.423 BaseBdev3 00:14:02.423 [2024-11-28 18:54:31.992648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:02.423 [2024-11-28 18:54:31.994051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:02.423 [2024-11-28 18:54:31.994108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:02.423 [2024-11-28 18:54:31.994508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.423 18:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.423 18:54:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.423 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:02.423 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.423 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.423 [ 00:14:02.423 { 00:14:02.423 "name": "BaseBdev3", 00:14:02.423 "aliases": [ 00:14:02.423 "3de51b3c-fd57-4535-93b7-da811f0d9d55" 00:14:02.423 ], 00:14:02.423 "product_name": "Malloc disk", 00:14:02.423 "block_size": 512, 00:14:02.423 "num_blocks": 65536, 00:14:02.423 "uuid": "3de51b3c-fd57-4535-93b7-da811f0d9d55", 00:14:02.423 "assigned_rate_limits": { 00:14:02.423 "rw_ios_per_sec": 0, 00:14:02.423 "rw_mbytes_per_sec": 0, 00:14:02.423 "r_mbytes_per_sec": 0, 00:14:02.423 "w_mbytes_per_sec": 0 00:14:02.423 }, 00:14:02.423 "claimed": true, 00:14:02.423 "claim_type": "exclusive_write", 00:14:02.423 "zoned": false, 00:14:02.423 "supported_io_types": { 00:14:02.423 "read": true, 00:14:02.423 "write": true, 00:14:02.423 "unmap": true, 00:14:02.423 "flush": true, 00:14:02.423 "reset": true, 00:14:02.423 "nvme_admin": false, 00:14:02.423 "nvme_io": false, 00:14:02.423 "nvme_io_md": false, 00:14:02.423 "write_zeroes": true, 00:14:02.423 "zcopy": true, 00:14:02.423 "get_zone_info": false, 00:14:02.423 "zone_management": false, 00:14:02.423 "zone_append": false, 00:14:02.423 "compare": false, 00:14:02.423 "compare_and_write": false, 00:14:02.423 "abort": true, 00:14:02.423 "seek_hole": false, 00:14:02.683 "seek_data": false, 00:14:02.683 "copy": true, 00:14:02.683 "nvme_iov_md": false 00:14:02.683 }, 00:14:02.683 "memory_domains": [ 00:14:02.683 { 00:14:02.683 "dma_device_id": "system", 00:14:02.683 "dma_device_type": 1 00:14:02.683 }, 00:14:02.683 { 00:14:02.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.683 
"dma_device_type": 2 00:14:02.683 } 00:14:02.683 ], 00:14:02.683 "driver_specific": {} 00:14:02.683 } 00:14:02.683 ] 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.683 "name": "Existed_Raid", 00:14:02.683 "uuid": "94e375e6-88f5-46b4-8bc2-0e0d6e6ebb6f", 00:14:02.683 "strip_size_kb": 64, 00:14:02.683 "state": "online", 00:14:02.683 "raid_level": "raid5f", 00:14:02.683 "superblock": true, 00:14:02.683 "num_base_bdevs": 3, 00:14:02.683 "num_base_bdevs_discovered": 3, 00:14:02.683 "num_base_bdevs_operational": 3, 00:14:02.683 "base_bdevs_list": [ 00:14:02.683 { 00:14:02.683 "name": "BaseBdev1", 00:14:02.683 "uuid": "7d10a99e-be8a-48af-a59a-5c834c51fa02", 00:14:02.683 "is_configured": true, 00:14:02.683 "data_offset": 2048, 00:14:02.683 "data_size": 63488 00:14:02.683 }, 00:14:02.683 { 00:14:02.683 "name": "BaseBdev2", 00:14:02.683 "uuid": "548aa338-2fc3-41c3-8eec-e9e633518109", 00:14:02.683 "is_configured": true, 00:14:02.683 "data_offset": 2048, 00:14:02.683 "data_size": 63488 00:14:02.683 }, 00:14:02.683 { 00:14:02.683 "name": "BaseBdev3", 00:14:02.683 "uuid": "3de51b3c-fd57-4535-93b7-da811f0d9d55", 00:14:02.683 "is_configured": true, 00:14:02.683 "data_offset": 2048, 00:14:02.683 "data_size": 63488 00:14:02.683 } 00:14:02.683 ] 00:14:02.683 }' 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.683 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.943 [2024-11-28 18:54:32.493397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.943 "name": "Existed_Raid", 00:14:02.943 "aliases": [ 00:14:02.943 "94e375e6-88f5-46b4-8bc2-0e0d6e6ebb6f" 00:14:02.943 ], 00:14:02.943 "product_name": "Raid Volume", 00:14:02.943 "block_size": 512, 00:14:02.943 "num_blocks": 126976, 00:14:02.943 "uuid": "94e375e6-88f5-46b4-8bc2-0e0d6e6ebb6f", 00:14:02.943 "assigned_rate_limits": { 00:14:02.943 "rw_ios_per_sec": 0, 00:14:02.943 "rw_mbytes_per_sec": 0, 00:14:02.943 "r_mbytes_per_sec": 0, 00:14:02.943 "w_mbytes_per_sec": 0 00:14:02.943 }, 00:14:02.943 "claimed": false, 00:14:02.943 "zoned": false, 00:14:02.943 "supported_io_types": { 00:14:02.943 "read": true, 00:14:02.943 "write": true, 00:14:02.943 "unmap": false, 
00:14:02.943 "flush": false, 00:14:02.943 "reset": true, 00:14:02.943 "nvme_admin": false, 00:14:02.943 "nvme_io": false, 00:14:02.943 "nvme_io_md": false, 00:14:02.943 "write_zeroes": true, 00:14:02.943 "zcopy": false, 00:14:02.943 "get_zone_info": false, 00:14:02.943 "zone_management": false, 00:14:02.943 "zone_append": false, 00:14:02.943 "compare": false, 00:14:02.943 "compare_and_write": false, 00:14:02.943 "abort": false, 00:14:02.943 "seek_hole": false, 00:14:02.943 "seek_data": false, 00:14:02.943 "copy": false, 00:14:02.943 "nvme_iov_md": false 00:14:02.943 }, 00:14:02.943 "driver_specific": { 00:14:02.943 "raid": { 00:14:02.943 "uuid": "94e375e6-88f5-46b4-8bc2-0e0d6e6ebb6f", 00:14:02.943 "strip_size_kb": 64, 00:14:02.943 "state": "online", 00:14:02.943 "raid_level": "raid5f", 00:14:02.943 "superblock": true, 00:14:02.943 "num_base_bdevs": 3, 00:14:02.943 "num_base_bdevs_discovered": 3, 00:14:02.943 "num_base_bdevs_operational": 3, 00:14:02.943 "base_bdevs_list": [ 00:14:02.943 { 00:14:02.943 "name": "BaseBdev1", 00:14:02.943 "uuid": "7d10a99e-be8a-48af-a59a-5c834c51fa02", 00:14:02.943 "is_configured": true, 00:14:02.943 "data_offset": 2048, 00:14:02.943 "data_size": 63488 00:14:02.943 }, 00:14:02.943 { 00:14:02.943 "name": "BaseBdev2", 00:14:02.943 "uuid": "548aa338-2fc3-41c3-8eec-e9e633518109", 00:14:02.943 "is_configured": true, 00:14:02.943 "data_offset": 2048, 00:14:02.943 "data_size": 63488 00:14:02.943 }, 00:14:02.943 { 00:14:02.943 "name": "BaseBdev3", 00:14:02.943 "uuid": "3de51b3c-fd57-4535-93b7-da811f0d9d55", 00:14:02.943 "is_configured": true, 00:14:02.943 "data_offset": 2048, 00:14:02.943 "data_size": 63488 00:14:02.943 } 00:14:02.943 ] 00:14:02.943 } 00:14:02.943 } 00:14:02.943 }' 00:14:02.943 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 
-- # base_bdev_names='BaseBdev1 00:14:03.203 BaseBdev2 00:14:03.203 BaseBdev3' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.203 18:54:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.203 [2024-11-28 18:54:32.745332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:03.203 
18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.203 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.462 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.462 "name": "Existed_Raid", 00:14:03.462 "uuid": "94e375e6-88f5-46b4-8bc2-0e0d6e6ebb6f", 00:14:03.462 "strip_size_kb": 64, 00:14:03.462 "state": "online", 00:14:03.462 "raid_level": "raid5f", 00:14:03.462 "superblock": true, 00:14:03.462 "num_base_bdevs": 3, 00:14:03.462 "num_base_bdevs_discovered": 2, 00:14:03.462 "num_base_bdevs_operational": 2, 00:14:03.462 "base_bdevs_list": [ 00:14:03.462 { 00:14:03.462 "name": null, 00:14:03.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.462 "is_configured": false, 00:14:03.462 "data_offset": 0, 00:14:03.462 "data_size": 63488 00:14:03.462 }, 00:14:03.462 { 00:14:03.462 "name": "BaseBdev2", 00:14:03.462 "uuid": "548aa338-2fc3-41c3-8eec-e9e633518109", 00:14:03.463 "is_configured": true, 00:14:03.463 "data_offset": 2048, 00:14:03.463 "data_size": 63488 00:14:03.463 }, 00:14:03.463 { 00:14:03.463 "name": "BaseBdev3", 00:14:03.463 "uuid": "3de51b3c-fd57-4535-93b7-da811f0d9d55", 00:14:03.463 "is_configured": true, 00:14:03.463 "data_offset": 2048, 00:14:03.463 "data_size": 63488 00:14:03.463 } 00:14:03.463 ] 00:14:03.463 }' 00:14:03.463 18:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.463 18:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.722 
18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.722 [2024-11-28 18:54:33.280880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.722 [2024-11-28 18:54:33.281001] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.722 [2024-11-28 18:54:33.292155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.722 18:54:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:03.722 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.983 [2024-11-28 18:54:33.352212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:03.983 [2024-11-28 18:54:33.352324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.983 18:54:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.983 BaseBdev2 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.983 18:54:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.983 [ 00:14:03.983 { 00:14:03.983 "name": "BaseBdev2", 00:14:03.983 "aliases": [ 00:14:03.983 "93cef4f9-2f8e-4167-ae28-cd013d66b2aa" 00:14:03.983 ], 00:14:03.983 "product_name": "Malloc disk", 00:14:03.983 "block_size": 512, 00:14:03.983 "num_blocks": 65536, 00:14:03.983 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:03.983 "assigned_rate_limits": { 00:14:03.983 "rw_ios_per_sec": 0, 00:14:03.983 "rw_mbytes_per_sec": 0, 00:14:03.983 "r_mbytes_per_sec": 0, 00:14:03.983 "w_mbytes_per_sec": 0 00:14:03.983 }, 00:14:03.983 "claimed": false, 00:14:03.983 "zoned": false, 00:14:03.983 "supported_io_types": { 00:14:03.983 "read": true, 00:14:03.983 "write": true, 00:14:03.983 "unmap": true, 00:14:03.983 "flush": true, 00:14:03.983 "reset": true, 00:14:03.983 "nvme_admin": false, 00:14:03.983 "nvme_io": false, 00:14:03.983 "nvme_io_md": false, 00:14:03.983 "write_zeroes": true, 00:14:03.983 "zcopy": true, 00:14:03.983 "get_zone_info": false, 00:14:03.983 "zone_management": false, 00:14:03.983 "zone_append": false, 00:14:03.983 "compare": false, 00:14:03.983 "compare_and_write": false, 00:14:03.983 "abort": true, 00:14:03.983 "seek_hole": false, 00:14:03.983 "seek_data": false, 00:14:03.983 "copy": true, 00:14:03.983 "nvme_iov_md": false 00:14:03.983 }, 00:14:03.983 "memory_domains": [ 
00:14:03.983 { 00:14:03.983 "dma_device_id": "system", 00:14:03.983 "dma_device_type": 1 00:14:03.983 }, 00:14:03.983 { 00:14:03.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.983 "dma_device_type": 2 00:14:03.983 } 00:14:03.983 ], 00:14:03.983 "driver_specific": {} 00:14:03.983 } 00:14:03.983 ] 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.983 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.984 BaseBdev3 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.984 18:54:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.984 [ 00:14:03.984 { 00:14:03.984 "name": "BaseBdev3", 00:14:03.984 "aliases": [ 00:14:03.984 "ba698d9f-3b8f-4ec2-8341-989458d097ae" 00:14:03.984 ], 00:14:03.984 "product_name": "Malloc disk", 00:14:03.984 "block_size": 512, 00:14:03.984 "num_blocks": 65536, 00:14:03.984 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:03.984 "assigned_rate_limits": { 00:14:03.984 "rw_ios_per_sec": 0, 00:14:03.984 "rw_mbytes_per_sec": 0, 00:14:03.984 "r_mbytes_per_sec": 0, 00:14:03.984 "w_mbytes_per_sec": 0 00:14:03.984 }, 00:14:03.984 "claimed": false, 00:14:03.984 "zoned": false, 00:14:03.984 "supported_io_types": { 00:14:03.984 "read": true, 00:14:03.984 "write": true, 00:14:03.984 "unmap": true, 00:14:03.984 "flush": true, 00:14:03.984 "reset": true, 00:14:03.984 "nvme_admin": false, 00:14:03.984 "nvme_io": false, 00:14:03.984 "nvme_io_md": false, 00:14:03.984 "write_zeroes": true, 00:14:03.984 "zcopy": true, 00:14:03.984 "get_zone_info": false, 00:14:03.984 "zone_management": false, 00:14:03.984 "zone_append": false, 00:14:03.984 "compare": false, 00:14:03.984 "compare_and_write": false, 00:14:03.984 "abort": true, 00:14:03.984 "seek_hole": false, 00:14:03.984 
"seek_data": false, 00:14:03.984 "copy": true, 00:14:03.984 "nvme_iov_md": false 00:14:03.984 }, 00:14:03.984 "memory_domains": [ 00:14:03.984 { 00:14:03.984 "dma_device_id": "system", 00:14:03.984 "dma_device_type": 1 00:14:03.984 }, 00:14:03.984 { 00:14:03.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.984 "dma_device_type": 2 00:14:03.984 } 00:14:03.984 ], 00:14:03.984 "driver_specific": {} 00:14:03.984 } 00:14:03.984 ] 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.984 [2024-11-28 18:54:33.527149] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.984 [2024-11-28 18:54:33.527275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.984 [2024-11-28 18:54:33.527312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.984 [2024-11-28 18:54:33.529177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.984 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.244 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.244 "name": "Existed_Raid", 00:14:04.244 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:04.244 "strip_size_kb": 64, 00:14:04.244 
"state": "configuring", 00:14:04.244 "raid_level": "raid5f", 00:14:04.244 "superblock": true, 00:14:04.244 "num_base_bdevs": 3, 00:14:04.244 "num_base_bdevs_discovered": 2, 00:14:04.244 "num_base_bdevs_operational": 3, 00:14:04.244 "base_bdevs_list": [ 00:14:04.244 { 00:14:04.244 "name": "BaseBdev1", 00:14:04.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.244 "is_configured": false, 00:14:04.244 "data_offset": 0, 00:14:04.244 "data_size": 0 00:14:04.244 }, 00:14:04.244 { 00:14:04.244 "name": "BaseBdev2", 00:14:04.244 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:04.244 "is_configured": true, 00:14:04.244 "data_offset": 2048, 00:14:04.244 "data_size": 63488 00:14:04.244 }, 00:14:04.244 { 00:14:04.244 "name": "BaseBdev3", 00:14:04.244 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:04.244 "is_configured": true, 00:14:04.244 "data_offset": 2048, 00:14:04.244 "data_size": 63488 00:14:04.244 } 00:14:04.244 ] 00:14:04.244 }' 00:14:04.244 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.244 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.504 [2024-11-28 18:54:33.979241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.504 18:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.504 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.504 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.504 "name": "Existed_Raid", 00:14:04.504 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:04.504 "strip_size_kb": 64, 00:14:04.504 "state": "configuring", 00:14:04.505 "raid_level": "raid5f", 00:14:04.505 "superblock": true, 00:14:04.505 "num_base_bdevs": 3, 00:14:04.505 "num_base_bdevs_discovered": 1, 
00:14:04.505 "num_base_bdevs_operational": 3, 00:14:04.505 "base_bdevs_list": [ 00:14:04.505 { 00:14:04.505 "name": "BaseBdev1", 00:14:04.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.505 "is_configured": false, 00:14:04.505 "data_offset": 0, 00:14:04.505 "data_size": 0 00:14:04.505 }, 00:14:04.505 { 00:14:04.505 "name": null, 00:14:04.505 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:04.505 "is_configured": false, 00:14:04.505 "data_offset": 0, 00:14:04.505 "data_size": 63488 00:14:04.505 }, 00:14:04.505 { 00:14:04.505 "name": "BaseBdev3", 00:14:04.505 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:04.505 "is_configured": true, 00:14:04.505 "data_offset": 2048, 00:14:04.505 "data_size": 63488 00:14:04.505 } 00:14:04.505 ] 00:14:04.505 }' 00:14:04.505 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.505 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.075 18:54:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.075 [2024-11-28 18:54:34.470517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.075 BaseBdev1 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.075 [ 00:14:05.075 { 00:14:05.075 "name": "BaseBdev1", 00:14:05.075 "aliases": [ 00:14:05.075 
"038f944d-58f0-49de-874d-af7af1f3fd61" 00:14:05.075 ], 00:14:05.075 "product_name": "Malloc disk", 00:14:05.075 "block_size": 512, 00:14:05.075 "num_blocks": 65536, 00:14:05.075 "uuid": "038f944d-58f0-49de-874d-af7af1f3fd61", 00:14:05.075 "assigned_rate_limits": { 00:14:05.075 "rw_ios_per_sec": 0, 00:14:05.075 "rw_mbytes_per_sec": 0, 00:14:05.075 "r_mbytes_per_sec": 0, 00:14:05.075 "w_mbytes_per_sec": 0 00:14:05.075 }, 00:14:05.075 "claimed": true, 00:14:05.075 "claim_type": "exclusive_write", 00:14:05.075 "zoned": false, 00:14:05.075 "supported_io_types": { 00:14:05.075 "read": true, 00:14:05.075 "write": true, 00:14:05.075 "unmap": true, 00:14:05.075 "flush": true, 00:14:05.075 "reset": true, 00:14:05.075 "nvme_admin": false, 00:14:05.075 "nvme_io": false, 00:14:05.075 "nvme_io_md": false, 00:14:05.075 "write_zeroes": true, 00:14:05.075 "zcopy": true, 00:14:05.075 "get_zone_info": false, 00:14:05.075 "zone_management": false, 00:14:05.075 "zone_append": false, 00:14:05.075 "compare": false, 00:14:05.075 "compare_and_write": false, 00:14:05.075 "abort": true, 00:14:05.075 "seek_hole": false, 00:14:05.075 "seek_data": false, 00:14:05.075 "copy": true, 00:14:05.075 "nvme_iov_md": false 00:14:05.075 }, 00:14:05.075 "memory_domains": [ 00:14:05.075 { 00:14:05.075 "dma_device_id": "system", 00:14:05.075 "dma_device_type": 1 00:14:05.075 }, 00:14:05.075 { 00:14:05.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.075 "dma_device_type": 2 00:14:05.075 } 00:14:05.075 ], 00:14:05.075 "driver_specific": {} 00:14:05.075 } 00:14:05.075 ] 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.075 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.076 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.076 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.076 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.076 "name": "Existed_Raid", 00:14:05.076 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:05.076 "strip_size_kb": 64, 00:14:05.076 "state": "configuring", 00:14:05.076 "raid_level": "raid5f", 00:14:05.076 "superblock": true, 00:14:05.076 "num_base_bdevs": 3, 00:14:05.076 
"num_base_bdevs_discovered": 2, 00:14:05.076 "num_base_bdevs_operational": 3, 00:14:05.076 "base_bdevs_list": [ 00:14:05.076 { 00:14:05.076 "name": "BaseBdev1", 00:14:05.076 "uuid": "038f944d-58f0-49de-874d-af7af1f3fd61", 00:14:05.076 "is_configured": true, 00:14:05.076 "data_offset": 2048, 00:14:05.076 "data_size": 63488 00:14:05.076 }, 00:14:05.076 { 00:14:05.076 "name": null, 00:14:05.076 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:05.076 "is_configured": false, 00:14:05.076 "data_offset": 0, 00:14:05.076 "data_size": 63488 00:14:05.076 }, 00:14:05.076 { 00:14:05.076 "name": "BaseBdev3", 00:14:05.076 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:05.076 "is_configured": true, 00:14:05.076 "data_offset": 2048, 00:14:05.076 "data_size": 63488 00:14:05.076 } 00:14:05.076 ] 00:14:05.076 }' 00:14:05.076 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.076 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.646 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.646 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.646 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.646 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:05.646 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.646 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:05.646 18:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:05.646 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:05.646 18:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.646 [2024-11-28 18:54:35.006700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.646 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.646 "name": "Existed_Raid", 00:14:05.646 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:05.646 "strip_size_kb": 64, 00:14:05.646 "state": "configuring", 00:14:05.646 "raid_level": "raid5f", 00:14:05.646 "superblock": true, 00:14:05.646 "num_base_bdevs": 3, 00:14:05.646 "num_base_bdevs_discovered": 1, 00:14:05.646 "num_base_bdevs_operational": 3, 00:14:05.646 "base_bdevs_list": [ 00:14:05.646 { 00:14:05.646 "name": "BaseBdev1", 00:14:05.646 "uuid": "038f944d-58f0-49de-874d-af7af1f3fd61", 00:14:05.646 "is_configured": true, 00:14:05.646 "data_offset": 2048, 00:14:05.646 "data_size": 63488 00:14:05.647 }, 00:14:05.647 { 00:14:05.647 "name": null, 00:14:05.647 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:05.647 "is_configured": false, 00:14:05.647 "data_offset": 0, 00:14:05.647 "data_size": 63488 00:14:05.647 }, 00:14:05.647 { 00:14:05.647 "name": null, 00:14:05.647 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:05.647 "is_configured": false, 00:14:05.647 "data_offset": 0, 00:14:05.647 "data_size": 63488 00:14:05.647 } 00:14:05.647 ] 00:14:05.647 }' 00:14:05.647 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.647 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.909 18:54:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.909 [2024-11-28 18:54:35.422828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.909 "name": "Existed_Raid", 00:14:05.909 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:05.909 "strip_size_kb": 64, 00:14:05.909 "state": "configuring", 00:14:05.909 "raid_level": "raid5f", 00:14:05.909 "superblock": true, 00:14:05.909 "num_base_bdevs": 3, 00:14:05.909 "num_base_bdevs_discovered": 2, 00:14:05.909 "num_base_bdevs_operational": 3, 00:14:05.909 "base_bdevs_list": [ 00:14:05.909 { 00:14:05.909 "name": "BaseBdev1", 00:14:05.909 "uuid": "038f944d-58f0-49de-874d-af7af1f3fd61", 00:14:05.909 "is_configured": true, 00:14:05.909 "data_offset": 2048, 00:14:05.909 "data_size": 63488 00:14:05.909 }, 00:14:05.909 { 00:14:05.909 "name": null, 00:14:05.909 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:05.909 "is_configured": false, 00:14:05.909 "data_offset": 0, 00:14:05.909 "data_size": 63488 00:14:05.909 }, 00:14:05.909 { 00:14:05.909 "name": "BaseBdev3", 00:14:05.909 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:05.909 "is_configured": true, 00:14:05.909 "data_offset": 2048, 00:14:05.909 "data_size": 63488 00:14:05.909 } 00:14:05.909 ] 00:14:05.909 }' 00:14:05.909 18:54:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.909 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.479 [2024-11-28 18:54:35.954979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.479 18:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.479 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.479 "name": "Existed_Raid", 00:14:06.479 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:06.479 "strip_size_kb": 64, 00:14:06.479 "state": "configuring", 00:14:06.479 "raid_level": "raid5f", 00:14:06.479 "superblock": true, 00:14:06.479 "num_base_bdevs": 3, 00:14:06.479 "num_base_bdevs_discovered": 1, 00:14:06.479 "num_base_bdevs_operational": 3, 00:14:06.479 "base_bdevs_list": [ 00:14:06.479 { 00:14:06.479 "name": null, 00:14:06.479 "uuid": "038f944d-58f0-49de-874d-af7af1f3fd61", 00:14:06.479 "is_configured": false, 00:14:06.479 "data_offset": 0, 00:14:06.479 "data_size": 63488 
00:14:06.479 }, 00:14:06.479 { 00:14:06.479 "name": null, 00:14:06.479 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:06.479 "is_configured": false, 00:14:06.479 "data_offset": 0, 00:14:06.479 "data_size": 63488 00:14:06.479 }, 00:14:06.479 { 00:14:06.479 "name": "BaseBdev3", 00:14:06.479 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:06.479 "is_configured": true, 00:14:06.479 "data_offset": 2048, 00:14:06.479 "data_size": 63488 00:14:06.479 } 00:14:06.479 ] 00:14:06.479 }' 00:14:06.479 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.479 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.049 [2024-11-28 18:54:36.401582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.049 "name": 
"Existed_Raid", 00:14:07.049 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:07.049 "strip_size_kb": 64, 00:14:07.049 "state": "configuring", 00:14:07.049 "raid_level": "raid5f", 00:14:07.049 "superblock": true, 00:14:07.049 "num_base_bdevs": 3, 00:14:07.049 "num_base_bdevs_discovered": 2, 00:14:07.049 "num_base_bdevs_operational": 3, 00:14:07.049 "base_bdevs_list": [ 00:14:07.049 { 00:14:07.049 "name": null, 00:14:07.049 "uuid": "038f944d-58f0-49de-874d-af7af1f3fd61", 00:14:07.049 "is_configured": false, 00:14:07.049 "data_offset": 0, 00:14:07.049 "data_size": 63488 00:14:07.049 }, 00:14:07.049 { 00:14:07.049 "name": "BaseBdev2", 00:14:07.049 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:07.049 "is_configured": true, 00:14:07.049 "data_offset": 2048, 00:14:07.049 "data_size": 63488 00:14:07.049 }, 00:14:07.049 { 00:14:07.049 "name": "BaseBdev3", 00:14:07.049 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:07.049 "is_configured": true, 00:14:07.049 "data_offset": 2048, 00:14:07.049 "data_size": 63488 00:14:07.049 } 00:14:07.049 ] 00:14:07.049 }' 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.049 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.309 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.309 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:07.309 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.309 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.309 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.309 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e 
]] 00:14:07.309 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 038f944d-58f0-49de-874d-af7af1f3fd61 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.570 [2024-11-28 18:54:36.976539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:07.570 NewBaseBdev 00:14:07.570 [2024-11-28 18:54:36.976779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:07.570 [2024-11-28 18:54:36.976795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:07.570 [2024-11-28 18:54:36.977045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:14:07.570 [2024-11-28 18:54:36.977460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:07.570 [2024-11-28 18:54:36.977478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:07.570 [2024-11-28 18:54:36.977575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.570 18:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.570 [ 00:14:07.570 { 00:14:07.570 "name": "NewBaseBdev", 00:14:07.570 "aliases": [ 00:14:07.570 "038f944d-58f0-49de-874d-af7af1f3fd61" 00:14:07.570 ], 00:14:07.570 "product_name": "Malloc disk", 00:14:07.570 "block_size": 512, 00:14:07.570 "num_blocks": 65536, 00:14:07.570 "uuid": "038f944d-58f0-49de-874d-af7af1f3fd61", 00:14:07.570 "assigned_rate_limits": { 00:14:07.570 "rw_ios_per_sec": 0, 00:14:07.570 
"rw_mbytes_per_sec": 0, 00:14:07.570 "r_mbytes_per_sec": 0, 00:14:07.570 "w_mbytes_per_sec": 0 00:14:07.570 }, 00:14:07.570 "claimed": true, 00:14:07.570 "claim_type": "exclusive_write", 00:14:07.570 "zoned": false, 00:14:07.570 "supported_io_types": { 00:14:07.570 "read": true, 00:14:07.570 "write": true, 00:14:07.570 "unmap": true, 00:14:07.570 "flush": true, 00:14:07.570 "reset": true, 00:14:07.570 "nvme_admin": false, 00:14:07.570 "nvme_io": false, 00:14:07.570 "nvme_io_md": false, 00:14:07.570 "write_zeroes": true, 00:14:07.570 "zcopy": true, 00:14:07.570 "get_zone_info": false, 00:14:07.570 "zone_management": false, 00:14:07.570 "zone_append": false, 00:14:07.570 "compare": false, 00:14:07.570 "compare_and_write": false, 00:14:07.570 "abort": true, 00:14:07.570 "seek_hole": false, 00:14:07.570 "seek_data": false, 00:14:07.570 "copy": true, 00:14:07.570 "nvme_iov_md": false 00:14:07.570 }, 00:14:07.570 "memory_domains": [ 00:14:07.570 { 00:14:07.570 "dma_device_id": "system", 00:14:07.570 "dma_device_type": 1 00:14:07.570 }, 00:14:07.570 { 00:14:07.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.570 "dma_device_type": 2 00:14:07.570 } 00:14:07.570 ], 00:14:07.570 "driver_specific": {} 00:14:07.570 } 00:14:07.570 ] 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.570 18:54:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.570 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.571 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.571 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.571 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.571 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.571 "name": "Existed_Raid", 00:14:07.571 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:07.571 "strip_size_kb": 64, 00:14:07.571 "state": "online", 00:14:07.571 "raid_level": "raid5f", 00:14:07.571 "superblock": true, 00:14:07.571 "num_base_bdevs": 3, 00:14:07.571 "num_base_bdevs_discovered": 3, 00:14:07.571 "num_base_bdevs_operational": 3, 00:14:07.571 "base_bdevs_list": [ 00:14:07.571 { 00:14:07.571 "name": "NewBaseBdev", 00:14:07.571 "uuid": "038f944d-58f0-49de-874d-af7af1f3fd61", 00:14:07.571 "is_configured": true, 00:14:07.571 "data_offset": 2048, 00:14:07.571 "data_size": 63488 00:14:07.571 }, 
00:14:07.571 { 00:14:07.571 "name": "BaseBdev2", 00:14:07.571 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:07.571 "is_configured": true, 00:14:07.571 "data_offset": 2048, 00:14:07.571 "data_size": 63488 00:14:07.571 }, 00:14:07.571 { 00:14:07.571 "name": "BaseBdev3", 00:14:07.571 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:07.571 "is_configured": true, 00:14:07.571 "data_offset": 2048, 00:14:07.571 "data_size": 63488 00:14:07.571 } 00:14:07.571 ] 00:14:07.571 }' 00:14:07.571 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.571 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.142 [2024-11-28 18:54:37.516915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.142 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:08.142 "name": "Existed_Raid", 00:14:08.142 "aliases": [ 00:14:08.142 "8592451c-e8aa-4eab-8c08-ca8e8678971c" 00:14:08.142 ], 00:14:08.142 "product_name": "Raid Volume", 00:14:08.142 "block_size": 512, 00:14:08.142 "num_blocks": 126976, 00:14:08.142 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:08.142 "assigned_rate_limits": { 00:14:08.142 "rw_ios_per_sec": 0, 00:14:08.142 "rw_mbytes_per_sec": 0, 00:14:08.142 "r_mbytes_per_sec": 0, 00:14:08.142 "w_mbytes_per_sec": 0 00:14:08.142 }, 00:14:08.142 "claimed": false, 00:14:08.142 "zoned": false, 00:14:08.142 "supported_io_types": { 00:14:08.142 "read": true, 00:14:08.142 "write": true, 00:14:08.142 "unmap": false, 00:14:08.142 "flush": false, 00:14:08.142 "reset": true, 00:14:08.142 "nvme_admin": false, 00:14:08.142 "nvme_io": false, 00:14:08.142 "nvme_io_md": false, 00:14:08.142 "write_zeroes": true, 00:14:08.142 "zcopy": false, 00:14:08.142 "get_zone_info": false, 00:14:08.142 "zone_management": false, 00:14:08.142 "zone_append": false, 00:14:08.142 "compare": false, 00:14:08.142 "compare_and_write": false, 00:14:08.142 "abort": false, 00:14:08.142 "seek_hole": false, 00:14:08.142 "seek_data": false, 00:14:08.142 "copy": false, 00:14:08.142 "nvme_iov_md": false 00:14:08.142 }, 00:14:08.142 "driver_specific": { 00:14:08.142 "raid": { 00:14:08.142 "uuid": "8592451c-e8aa-4eab-8c08-ca8e8678971c", 00:14:08.142 "strip_size_kb": 64, 00:14:08.142 "state": "online", 00:14:08.142 "raid_level": "raid5f", 00:14:08.142 "superblock": true, 00:14:08.142 "num_base_bdevs": 3, 00:14:08.142 "num_base_bdevs_discovered": 3, 00:14:08.142 "num_base_bdevs_operational": 3, 00:14:08.142 "base_bdevs_list": [ 00:14:08.142 { 00:14:08.142 "name": "NewBaseBdev", 00:14:08.142 "uuid": 
"038f944d-58f0-49de-874d-af7af1f3fd61", 00:14:08.142 "is_configured": true, 00:14:08.142 "data_offset": 2048, 00:14:08.142 "data_size": 63488 00:14:08.142 }, 00:14:08.143 { 00:14:08.143 "name": "BaseBdev2", 00:14:08.143 "uuid": "93cef4f9-2f8e-4167-ae28-cd013d66b2aa", 00:14:08.143 "is_configured": true, 00:14:08.143 "data_offset": 2048, 00:14:08.143 "data_size": 63488 00:14:08.143 }, 00:14:08.143 { 00:14:08.143 "name": "BaseBdev3", 00:14:08.143 "uuid": "ba698d9f-3b8f-4ec2-8341-989458d097ae", 00:14:08.143 "is_configured": true, 00:14:08.143 "data_offset": 2048, 00:14:08.143 "data_size": 63488 00:14:08.143 } 00:14:08.143 ] 00:14:08.143 } 00:14:08.143 } 00:14:08.143 }' 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:08.143 BaseBdev2 00:14:08.143 BaseBdev3' 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.143 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.407 [2024-11-28 18:54:37.816818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.407 [2024-11-28 18:54:37.816845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.407 [2024-11-28 18:54:37.816906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.407 [2024-11-28 18:54:37.817148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.407 [2024-11-28 18:54:37.817156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 92538 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 92538 ']' 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 92538 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.407 18:54:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92538 00:14:08.407 killing process with pid 92538 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92538' 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 92538 00:14:08.407 [2024-11-28 18:54:37.867289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.407 18:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 92538 00:14:08.407 [2024-11-28 18:54:37.898297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.668 18:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:08.668 ************************************ 00:14:08.668 END TEST raid5f_state_function_test_sb 00:14:08.668 ************************************ 00:14:08.668 00:14:08.668 real 0m9.006s 00:14:08.668 user 0m15.292s 00:14:08.668 sys 0m1.964s 00:14:08.668 18:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.668 18:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.668 18:54:38 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:08.668 18:54:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:08.668 18:54:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.668 18:54:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.668 ************************************ 00:14:08.668 START TEST 
raid5f_superblock_test 00:14:08.668 ************************************ 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- 
# raid_pid=93148 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 93148 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 93148 ']' 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.668 18:54:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.929 [2024-11-28 18:54:38.312182] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:08.929 [2024-11-28 18:54:38.312340] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93148 ] 00:14:08.929 [2024-11-28 18:54:38.452376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:08.929 [2024-11-28 18:54:38.492770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.929 [2024-11-28 18:54:38.519459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.189 [2024-11-28 18:54:38.564232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.189 [2024-11-28 18:54:38.564273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.761 malloc1 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.761 [2024-11-28 18:54:39.133934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:09.761 [2024-11-28 18:54:39.134102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.761 [2024-11-28 18:54:39.134146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:09.761 [2024-11-28 18:54:39.134177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.761 [2024-11-28 18:54:39.136308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.761 [2024-11-28 18:54:39.136378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:09.761 pt1 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:09.761 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:09.762 18:54:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.762 malloc2 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.762 [2024-11-28 18:54:39.166602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:09.762 [2024-11-28 18:54:39.166718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.762 [2024-11-28 18:54:39.166752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:09.762 [2024-11-28 18:54:39.166779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.762 [2024-11-28 18:54:39.168789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.762 [2024-11-28 18:54:39.168859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:09.762 pt2 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.762 malloc3 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.762 [2024-11-28 18:54:39.195217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:09.762 [2024-11-28 18:54:39.195328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.762 [2024-11-28 18:54:39.195364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:14:09.762 [2024-11-28 18:54:39.195390] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.762 [2024-11-28 18:54:39.197434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.762 [2024-11-28 18:54:39.197509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:09.762 pt3 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.762 [2024-11-28 18:54:39.207270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:09.762 [2024-11-28 18:54:39.209141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:09.762 [2024-11-28 18:54:39.209236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:09.762 [2024-11-28 18:54:39.209404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:09.762 [2024-11-28 18:54:39.209462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:09.762 [2024-11-28 18:54:39.209703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:09.762 [2024-11-28 18:54:39.210139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:09.762 [2024-11-28 18:54:39.210185] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:09.762 [2024-11-28 18:54:39.210337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.762 "name": "raid_bdev1", 00:14:09.762 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:09.762 "strip_size_kb": 64, 00:14:09.762 "state": "online", 00:14:09.762 "raid_level": "raid5f", 00:14:09.762 "superblock": true, 00:14:09.762 "num_base_bdevs": 3, 00:14:09.762 "num_base_bdevs_discovered": 3, 00:14:09.762 "num_base_bdevs_operational": 3, 00:14:09.762 "base_bdevs_list": [ 00:14:09.762 { 00:14:09.762 "name": "pt1", 00:14:09.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:09.762 "is_configured": true, 00:14:09.762 "data_offset": 2048, 00:14:09.762 "data_size": 63488 00:14:09.762 }, 00:14:09.762 { 00:14:09.762 "name": "pt2", 00:14:09.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.762 "is_configured": true, 00:14:09.762 "data_offset": 2048, 00:14:09.762 "data_size": 63488 00:14:09.762 }, 00:14:09.762 { 00:14:09.762 "name": "pt3", 00:14:09.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.762 "is_configured": true, 00:14:09.762 "data_offset": 2048, 00:14:09.762 "data_size": 63488 00:14:09.762 } 00:14:09.762 ] 00:14:09.762 }' 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.762 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.339 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:10.339 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:10.340 
18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:10.340 [2024-11-28 18:54:39.692019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:10.340 "name": "raid_bdev1", 00:14:10.340 "aliases": [ 00:14:10.340 "7c58b386-cce7-49d0-8888-e19bda9c5ac3" 00:14:10.340 ], 00:14:10.340 "product_name": "Raid Volume", 00:14:10.340 "block_size": 512, 00:14:10.340 "num_blocks": 126976, 00:14:10.340 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:10.340 "assigned_rate_limits": { 00:14:10.340 "rw_ios_per_sec": 0, 00:14:10.340 "rw_mbytes_per_sec": 0, 00:14:10.340 "r_mbytes_per_sec": 0, 00:14:10.340 "w_mbytes_per_sec": 0 00:14:10.340 }, 00:14:10.340 "claimed": false, 00:14:10.340 "zoned": false, 00:14:10.340 "supported_io_types": { 00:14:10.340 "read": true, 00:14:10.340 "write": true, 00:14:10.340 "unmap": false, 00:14:10.340 "flush": false, 00:14:10.340 "reset": true, 00:14:10.340 "nvme_admin": false, 00:14:10.340 "nvme_io": false, 00:14:10.340 "nvme_io_md": false, 00:14:10.340 "write_zeroes": true, 00:14:10.340 "zcopy": false, 00:14:10.340 "get_zone_info": false, 00:14:10.340 "zone_management": false, 00:14:10.340 "zone_append": false, 00:14:10.340 "compare": false, 00:14:10.340 "compare_and_write": false, 00:14:10.340 "abort": false, 00:14:10.340 "seek_hole": 
false, 00:14:10.340 "seek_data": false, 00:14:10.340 "copy": false, 00:14:10.340 "nvme_iov_md": false 00:14:10.340 }, 00:14:10.340 "driver_specific": { 00:14:10.340 "raid": { 00:14:10.340 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:10.340 "strip_size_kb": 64, 00:14:10.340 "state": "online", 00:14:10.340 "raid_level": "raid5f", 00:14:10.340 "superblock": true, 00:14:10.340 "num_base_bdevs": 3, 00:14:10.340 "num_base_bdevs_discovered": 3, 00:14:10.340 "num_base_bdevs_operational": 3, 00:14:10.340 "base_bdevs_list": [ 00:14:10.340 { 00:14:10.340 "name": "pt1", 00:14:10.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:10.340 "is_configured": true, 00:14:10.340 "data_offset": 2048, 00:14:10.340 "data_size": 63488 00:14:10.340 }, 00:14:10.340 { 00:14:10.340 "name": "pt2", 00:14:10.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.340 "is_configured": true, 00:14:10.340 "data_offset": 2048, 00:14:10.340 "data_size": 63488 00:14:10.340 }, 00:14:10.340 { 00:14:10.340 "name": "pt3", 00:14:10.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.340 "is_configured": true, 00:14:10.340 "data_offset": 2048, 00:14:10.340 "data_size": 63488 00:14:10.340 } 00:14:10.340 ] 00:14:10.340 } 00:14:10.340 } 00:14:10.340 }' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:10.340 pt2 00:14:10.340 pt3' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.340 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.602 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.602 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.602 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.602 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:10.602 18:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:10.602 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.602 18:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.602 [2024-11-28 18:54:39.996077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7c58b386-cce7-49d0-8888-e19bda9c5ac3 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7c58b386-cce7-49d0-8888-e19bda9c5ac3 ']' 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.602 [2024-11-28 18:54:40.039909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.602 [2024-11-28 
18:54:40.039935] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.602 [2024-11-28 18:54:40.040004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.602 [2024-11-28 18:54:40.040077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.602 [2024-11-28 18:54:40.040088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i 
in "${base_bdevs_pt[@]}" 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:10.602 18:54:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.602 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.602 [2024-11-28 18:54:40.184144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:10.602 [2024-11-28 18:54:40.186070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:10.602 [2024-11-28 18:54:40.186158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:10.602 [2024-11-28 18:54:40.186220] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:10.602 [2024-11-28 18:54:40.186318] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:10.602 [2024-11-28 18:54:40.186398] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:10.602 [2024-11-28 18:54:40.186447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:14:10.602 [2024-11-28 18:54:40.186465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:14:10.602 request: 00:14:10.602 { 00:14:10.602 "name": "raid_bdev1", 00:14:10.603 "raid_level": "raid5f", 00:14:10.603 "base_bdevs": [ 00:14:10.603 "malloc1", 00:14:10.603 "malloc2", 00:14:10.603 "malloc3" 00:14:10.603 ], 00:14:10.603 "strip_size_kb": 64, 00:14:10.603 "superblock": false, 00:14:10.603 "method": "bdev_raid_create", 00:14:10.603 "req_id": 1 00:14:10.603 } 00:14:10.603 Got JSON-RPC error response 00:14:10.603 response: 00:14:10.603 { 00:14:10.603 "code": -17, 00:14:10.603 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:10.603 } 00:14:10.603 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:10.603 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:10.603 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:10.603 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:10.603 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:10.603 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.603 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:10.603 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.603 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n 
'' ']' 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.863 [2024-11-28 18:54:40.252117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:10.863 [2024-11-28 18:54:40.252203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.863 [2024-11-28 18:54:40.252236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:10.863 [2024-11-28 18:54:40.252263] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.863 [2024-11-28 18:54:40.254350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.863 [2024-11-28 18:54:40.254435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:10.863 [2024-11-28 18:54:40.254524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:10.863 [2024-11-28 18:54:40.254572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:10.863 pt1 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.863 18:54:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.863 "name": "raid_bdev1", 00:14:10.863 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:10.863 "strip_size_kb": 64, 00:14:10.863 "state": "configuring", 00:14:10.863 "raid_level": "raid5f", 00:14:10.863 "superblock": true, 00:14:10.863 "num_base_bdevs": 3, 00:14:10.863 "num_base_bdevs_discovered": 1, 00:14:10.863 "num_base_bdevs_operational": 3, 00:14:10.863 "base_bdevs_list": [ 00:14:10.863 { 00:14:10.863 "name": "pt1", 00:14:10.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:10.863 "is_configured": true, 00:14:10.863 "data_offset": 2048, 00:14:10.863 "data_size": 63488 00:14:10.863 }, 00:14:10.863 { 00:14:10.863 "name": null, 00:14:10.863 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:10.863 "is_configured": false, 00:14:10.863 "data_offset": 2048, 00:14:10.863 "data_size": 63488 00:14:10.863 }, 00:14:10.863 { 00:14:10.863 "name": null, 00:14:10.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.863 "is_configured": false, 00:14:10.863 "data_offset": 2048, 00:14:10.863 "data_size": 63488 00:14:10.863 } 00:14:10.863 ] 00:14:10.863 }' 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.863 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.433 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:11.433 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:11.433 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.433 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.433 [2024-11-28 18:54:40.744270] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:11.433 [2024-11-28 18:54:40.744379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.433 [2024-11-28 18:54:40.744406] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:11.433 [2024-11-28 18:54:40.744415] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.433 [2024-11-28 18:54:40.744773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.433 [2024-11-28 18:54:40.744790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:11.433 [2024-11-28 18:54:40.744851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:11.433 [2024-11-28 18:54:40.744874] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:11.433 pt2 00:14:11.433 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.433 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:11.433 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.433 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.433 [2024-11-28 18:54:40.756314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.434 
18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.434 "name": "raid_bdev1", 00:14:11.434 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:11.434 "strip_size_kb": 64, 00:14:11.434 "state": "configuring", 00:14:11.434 "raid_level": "raid5f", 00:14:11.434 "superblock": true, 00:14:11.434 "num_base_bdevs": 3, 00:14:11.434 "num_base_bdevs_discovered": 1, 00:14:11.434 "num_base_bdevs_operational": 3, 00:14:11.434 "base_bdevs_list": [ 00:14:11.434 { 00:14:11.434 "name": "pt1", 00:14:11.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:11.434 "is_configured": true, 00:14:11.434 "data_offset": 2048, 00:14:11.434 "data_size": 63488 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "name": null, 00:14:11.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:11.434 "is_configured": false, 00:14:11.434 "data_offset": 0, 00:14:11.434 "data_size": 63488 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "name": null, 00:14:11.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:11.434 "is_configured": false, 00:14:11.434 "data_offset": 2048, 00:14:11.434 "data_size": 63488 00:14:11.434 } 00:14:11.434 ] 00:14:11.434 }' 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.434 18:54:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.695 [2024-11-28 18:54:41.236404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:11.695 [2024-11-28 18:54:41.236516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.695 [2024-11-28 18:54:41.236547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:11.695 [2024-11-28 18:54:41.236575] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.695 [2024-11-28 18:54:41.236929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.695 [2024-11-28 18:54:41.236988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:11.695 [2024-11-28 18:54:41.237070] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:11.695 [2024-11-28 18:54:41.237119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:11.695 pt2 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.695 
18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.695 [2024-11-28 18:54:41.248388] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:11.695 [2024-11-28 18:54:41.248490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.695 [2024-11-28 18:54:41.248518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:11.695 [2024-11-28 18:54:41.248545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.695 [2024-11-28 18:54:41.248865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.695 [2024-11-28 18:54:41.248923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:11.695 [2024-11-28 18:54:41.248996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:11.695 [2024-11-28 18:54:41.249043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:11.695 [2024-11-28 18:54:41.249163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:11.695 [2024-11-28 18:54:41.249208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:11.695 [2024-11-28 18:54:41.249472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:11.695 [2024-11-28 18:54:41.249892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:11.695 [2024-11-28 18:54:41.249939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:11.695 [2024-11-28 18:54:41.250068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.695 pt3 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.695 18:54:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.695 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.955 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.955 "name": 
"raid_bdev1", 00:14:11.955 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:11.955 "strip_size_kb": 64, 00:14:11.955 "state": "online", 00:14:11.955 "raid_level": "raid5f", 00:14:11.955 "superblock": true, 00:14:11.955 "num_base_bdevs": 3, 00:14:11.955 "num_base_bdevs_discovered": 3, 00:14:11.955 "num_base_bdevs_operational": 3, 00:14:11.955 "base_bdevs_list": [ 00:14:11.955 { 00:14:11.955 "name": "pt1", 00:14:11.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:11.955 "is_configured": true, 00:14:11.955 "data_offset": 2048, 00:14:11.955 "data_size": 63488 00:14:11.955 }, 00:14:11.955 { 00:14:11.955 "name": "pt2", 00:14:11.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:11.955 "is_configured": true, 00:14:11.955 "data_offset": 2048, 00:14:11.955 "data_size": 63488 00:14:11.955 }, 00:14:11.955 { 00:14:11.955 "name": "pt3", 00:14:11.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:11.955 "is_configured": true, 00:14:11.955 "data_offset": 2048, 00:14:11.955 "data_size": 63488 00:14:11.955 } 00:14:11.955 ] 00:14:11.955 }' 00:14:11.955 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.955 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.215 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.216 [2024-11-28 18:54:41.672707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:12.216 "name": "raid_bdev1", 00:14:12.216 "aliases": [ 00:14:12.216 "7c58b386-cce7-49d0-8888-e19bda9c5ac3" 00:14:12.216 ], 00:14:12.216 "product_name": "Raid Volume", 00:14:12.216 "block_size": 512, 00:14:12.216 "num_blocks": 126976, 00:14:12.216 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:12.216 "assigned_rate_limits": { 00:14:12.216 "rw_ios_per_sec": 0, 00:14:12.216 "rw_mbytes_per_sec": 0, 00:14:12.216 "r_mbytes_per_sec": 0, 00:14:12.216 "w_mbytes_per_sec": 0 00:14:12.216 }, 00:14:12.216 "claimed": false, 00:14:12.216 "zoned": false, 00:14:12.216 "supported_io_types": { 00:14:12.216 "read": true, 00:14:12.216 "write": true, 00:14:12.216 "unmap": false, 00:14:12.216 "flush": false, 00:14:12.216 "reset": true, 00:14:12.216 "nvme_admin": false, 00:14:12.216 "nvme_io": false, 00:14:12.216 "nvme_io_md": false, 00:14:12.216 "write_zeroes": true, 00:14:12.216 "zcopy": false, 00:14:12.216 "get_zone_info": false, 00:14:12.216 "zone_management": false, 00:14:12.216 "zone_append": false, 00:14:12.216 "compare": false, 00:14:12.216 "compare_and_write": false, 00:14:12.216 "abort": false, 00:14:12.216 "seek_hole": false, 00:14:12.216 "seek_data": false, 00:14:12.216 "copy": false, 00:14:12.216 "nvme_iov_md": false 00:14:12.216 }, 00:14:12.216 "driver_specific": { 00:14:12.216 
"raid": { 00:14:12.216 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:12.216 "strip_size_kb": 64, 00:14:12.216 "state": "online", 00:14:12.216 "raid_level": "raid5f", 00:14:12.216 "superblock": true, 00:14:12.216 "num_base_bdevs": 3, 00:14:12.216 "num_base_bdevs_discovered": 3, 00:14:12.216 "num_base_bdevs_operational": 3, 00:14:12.216 "base_bdevs_list": [ 00:14:12.216 { 00:14:12.216 "name": "pt1", 00:14:12.216 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:12.216 "is_configured": true, 00:14:12.216 "data_offset": 2048, 00:14:12.216 "data_size": 63488 00:14:12.216 }, 00:14:12.216 { 00:14:12.216 "name": "pt2", 00:14:12.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:12.216 "is_configured": true, 00:14:12.216 "data_offset": 2048, 00:14:12.216 "data_size": 63488 00:14:12.216 }, 00:14:12.216 { 00:14:12.216 "name": "pt3", 00:14:12.216 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:12.216 "is_configured": true, 00:14:12.216 "data_offset": 2048, 00:14:12.216 "data_size": 63488 00:14:12.216 } 00:14:12.216 ] 00:14:12.216 } 00:14:12.216 } 00:14:12.216 }' 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:12.216 pt2 00:14:12.216 pt3' 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.216 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.476 [2024-11-28 18:54:41.956802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7c58b386-cce7-49d0-8888-e19bda9c5ac3 '!=' 7c58b386-cce7-49d0-8888-e19bda9c5ac3 ']' 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.476 18:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.476 [2024-11-28 18:54:42.004649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.476 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.477 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.477 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.477 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.477 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.477 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.477 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.477 "name": "raid_bdev1", 00:14:12.477 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 
00:14:12.477 "strip_size_kb": 64, 00:14:12.477 "state": "online", 00:14:12.477 "raid_level": "raid5f", 00:14:12.477 "superblock": true, 00:14:12.477 "num_base_bdevs": 3, 00:14:12.477 "num_base_bdevs_discovered": 2, 00:14:12.477 "num_base_bdevs_operational": 2, 00:14:12.477 "base_bdevs_list": [ 00:14:12.477 { 00:14:12.477 "name": null, 00:14:12.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.477 "is_configured": false, 00:14:12.477 "data_offset": 0, 00:14:12.477 "data_size": 63488 00:14:12.477 }, 00:14:12.477 { 00:14:12.477 "name": "pt2", 00:14:12.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:12.477 "is_configured": true, 00:14:12.477 "data_offset": 2048, 00:14:12.477 "data_size": 63488 00:14:12.477 }, 00:14:12.477 { 00:14:12.477 "name": "pt3", 00:14:12.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:12.477 "is_configured": true, 00:14:12.477 "data_offset": 2048, 00:14:12.477 "data_size": 63488 00:14:12.477 } 00:14:12.477 ] 00:14:12.477 }' 00:14:12.477 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.477 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.048 [2024-11-28 18:54:42.460727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.048 [2024-11-28 18:54:42.460803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.048 [2024-11-28 18:54:42.460889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.048 [2024-11-28 18:54:42.460950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:14:13.048 [2024-11-28 18:54:42.460983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:13.048 
18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.048 [2024-11-28 18:54:42.544760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:13.048 [2024-11-28 18:54:42.544810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.048 [2024-11-28 18:54:42.544826] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:13.048 [2024-11-28 18:54:42.544836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.048 [2024-11-28 18:54:42.546845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.048 [2024-11-28 18:54:42.546924] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:13.048 [2024-11-28 18:54:42.546985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:13.048 [2024-11-28 18:54:42.547019] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:13.048 pt2 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:13.048 "name": "raid_bdev1", 00:14:13.048 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:13.048 "strip_size_kb": 64, 00:14:13.048 "state": "configuring", 00:14:13.048 "raid_level": "raid5f", 00:14:13.048 "superblock": true, 00:14:13.048 "num_base_bdevs": 3, 00:14:13.048 "num_base_bdevs_discovered": 1, 00:14:13.048 "num_base_bdevs_operational": 2, 00:14:13.048 "base_bdevs_list": [ 00:14:13.048 { 00:14:13.048 "name": null, 00:14:13.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.048 "is_configured": false, 00:14:13.048 "data_offset": 2048, 00:14:13.048 "data_size": 63488 00:14:13.048 }, 00:14:13.048 { 00:14:13.048 "name": "pt2", 00:14:13.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.048 "is_configured": true, 00:14:13.048 "data_offset": 2048, 00:14:13.048 "data_size": 63488 00:14:13.048 }, 00:14:13.048 { 00:14:13.048 "name": null, 00:14:13.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:13.048 "is_configured": false, 00:14:13.048 "data_offset": 2048, 00:14:13.048 "data_size": 63488 00:14:13.048 } 00:14:13.048 ] 00:14:13.048 }' 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.048 18:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.619 [2024-11-28 
18:54:43.024899] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:13.619 [2024-11-28 18:54:43.025006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.619 [2024-11-28 18:54:43.025027] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:13.619 [2024-11-28 18:54:43.025038] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.619 [2024-11-28 18:54:43.025379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.619 [2024-11-28 18:54:43.025397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:13.619 [2024-11-28 18:54:43.025464] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:13.619 [2024-11-28 18:54:43.025488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:13.619 [2024-11-28 18:54:43.025567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:13.619 [2024-11-28 18:54:43.025577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:13.619 [2024-11-28 18:54:43.025787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:13.619 [2024-11-28 18:54:43.026213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:13.619 [2024-11-28 18:54:43.026231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:13.619 [2024-11-28 18:54:43.026464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.619 pt3 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:13.619 
18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.619 "name": "raid_bdev1", 00:14:13.619 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:13.619 "strip_size_kb": 64, 00:14:13.619 "state": "online", 00:14:13.619 "raid_level": "raid5f", 00:14:13.619 "superblock": true, 00:14:13.619 "num_base_bdevs": 3, 00:14:13.619 "num_base_bdevs_discovered": 2, 00:14:13.619 "num_base_bdevs_operational": 2, 
00:14:13.619 "base_bdevs_list": [ 00:14:13.619 { 00:14:13.619 "name": null, 00:14:13.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.619 "is_configured": false, 00:14:13.619 "data_offset": 2048, 00:14:13.619 "data_size": 63488 00:14:13.619 }, 00:14:13.619 { 00:14:13.619 "name": "pt2", 00:14:13.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.619 "is_configured": true, 00:14:13.619 "data_offset": 2048, 00:14:13.619 "data_size": 63488 00:14:13.619 }, 00:14:13.619 { 00:14:13.619 "name": "pt3", 00:14:13.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:13.619 "is_configured": true, 00:14:13.619 "data_offset": 2048, 00:14:13.619 "data_size": 63488 00:14:13.619 } 00:14:13.619 ] 00:14:13.619 }' 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.619 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.190 [2024-11-28 18:54:43.501013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.190 [2024-11-28 18:54:43.501098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.190 [2024-11-28 18:54:43.501188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.190 [2024-11-28 18:54:43.501253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.190 [2024-11-28 18:54:43.501346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.190 [2024-11-28 18:54:43.573051] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:14.190 [2024-11-28 18:54:43.573144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:14.190 [2024-11-28 18:54:43.573166] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:14.190 [2024-11-28 18:54:43.573174] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.190 [2024-11-28 18:54:43.575263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.190 [2024-11-28 18:54:43.575299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:14.190 [2024-11-28 18:54:43.575356] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:14.190 [2024-11-28 18:54:43.575384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:14.190 [2024-11-28 18:54:43.575513] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:14.190 [2024-11-28 18:54:43.575525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.190 [2024-11-28 18:54:43.575541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:14:14.190 [2024-11-28 18:54:43.575575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:14.190 pt1 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.190 "name": "raid_bdev1", 00:14:14.190 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:14.190 "strip_size_kb": 64, 00:14:14.190 "state": "configuring", 00:14:14.190 "raid_level": "raid5f", 00:14:14.190 "superblock": true, 00:14:14.190 "num_base_bdevs": 3, 00:14:14.190 "num_base_bdevs_discovered": 1, 00:14:14.190 "num_base_bdevs_operational": 2, 00:14:14.190 "base_bdevs_list": [ 00:14:14.190 { 00:14:14.190 "name": null, 00:14:14.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.190 "is_configured": false, 00:14:14.190 "data_offset": 2048, 00:14:14.190 "data_size": 63488 00:14:14.190 }, 00:14:14.190 { 00:14:14.190 "name": "pt2", 
00:14:14.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.190 "is_configured": true, 00:14:14.190 "data_offset": 2048, 00:14:14.190 "data_size": 63488 00:14:14.190 }, 00:14:14.190 { 00:14:14.190 "name": null, 00:14:14.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.190 "is_configured": false, 00:14:14.190 "data_offset": 2048, 00:14:14.190 "data_size": 63488 00:14:14.190 } 00:14:14.190 ] 00:14:14.190 }' 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.190 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.451 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:14.451 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.451 18:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.451 18:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:14.451 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.451 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:14.451 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:14.451 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.451 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.451 [2024-11-28 18:54:44.053196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:14.451 [2024-11-28 18:54:44.053299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.451 [2024-11-28 18:54:44.053334] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:14.451 [2024-11-28 18:54:44.053363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.451 [2024-11-28 18:54:44.053837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.451 [2024-11-28 18:54:44.053903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:14.451 [2024-11-28 18:54:44.054003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:14.451 [2024-11-28 18:54:44.054057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:14.451 [2024-11-28 18:54:44.054182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:14.452 [2024-11-28 18:54:44.054225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:14.452 [2024-11-28 18:54:44.054516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:14:14.452 [2024-11-28 18:54:44.054972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:14.452 [2024-11-28 18:54:44.055035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:14.452 [2024-11-28 18:54:44.055223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.712 pt3 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.712 "name": "raid_bdev1", 00:14:14.712 "uuid": "7c58b386-cce7-49d0-8888-e19bda9c5ac3", 00:14:14.712 "strip_size_kb": 64, 00:14:14.712 "state": "online", 00:14:14.712 "raid_level": "raid5f", 00:14:14.712 "superblock": true, 00:14:14.712 "num_base_bdevs": 3, 00:14:14.712 "num_base_bdevs_discovered": 2, 00:14:14.712 "num_base_bdevs_operational": 2, 00:14:14.712 "base_bdevs_list": [ 00:14:14.712 { 00:14:14.712 "name": null, 00:14:14.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.712 "is_configured": false, 00:14:14.712 "data_offset": 2048, 00:14:14.712 "data_size": 63488 00:14:14.712 }, 00:14:14.712 { 
00:14:14.712 "name": "pt2", 00:14:14.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.712 "is_configured": true, 00:14:14.712 "data_offset": 2048, 00:14:14.712 "data_size": 63488 00:14:14.712 }, 00:14:14.712 { 00:14:14.712 "name": "pt3", 00:14:14.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.712 "is_configured": true, 00:14:14.712 "data_offset": 2048, 00:14:14.712 "data_size": 63488 00:14:14.712 } 00:14:14.712 ] 00:14:14.712 }' 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.712 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.973 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.973 [2024-11-28 18:54:44.561517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.233 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:15.233 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7c58b386-cce7-49d0-8888-e19bda9c5ac3 '!=' 7c58b386-cce7-49d0-8888-e19bda9c5ac3 ']' 00:14:15.233 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 93148 00:14:15.233 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 93148 ']' 00:14:15.233 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 93148 00:14:15.234 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:15.234 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.234 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93148 00:14:15.234 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.234 killing process with pid 93148 00:14:15.234 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.234 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93148' 00:14:15.234 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 93148 00:14:15.234 [2024-11-28 18:54:44.628765] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.234 [2024-11-28 18:54:44.628829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.234 [2024-11-28 18:54:44.628880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.234 [2024-11-28 18:54:44.628891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:15.234 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 93148 00:14:15.234 
[2024-11-28 18:54:44.661705] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.494 18:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:15.494 00:14:15.494 real 0m6.673s 00:14:15.494 user 0m11.181s 00:14:15.494 sys 0m1.491s 00:14:15.494 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.494 ************************************ 00:14:15.494 END TEST raid5f_superblock_test 00:14:15.494 ************************************ 00:14:15.494 18:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.494 18:54:44 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:15.494 18:54:44 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:15.494 18:54:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:15.494 18:54:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.494 18:54:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.494 ************************************ 00:14:15.494 START TEST raid5f_rebuild_test 00:14:15.494 ************************************ 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 
00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:15.494 18:54:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=93582 00:14:15.494 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:15.495 18:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 93582 00:14:15.495 18:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 93582 ']' 00:14:15.495 18:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.495 18:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.495 18:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.495 18:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.495 18:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.495 [2024-11-28 18:54:45.076467] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:15.495 [2024-11-28 18:54:45.076653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:15.495 Zero copy mechanism will not be used. 
00:14:15.495 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93582 ] 00:14:15.754 [2024-11-28 18:54:45.216387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:15.754 [2024-11-28 18:54:45.252527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.754 [2024-11-28 18:54:45.279233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.754 [2024-11-28 18:54:45.323042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.754 [2024-11-28 18:54:45.323159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.323 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.323 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:16.323 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.323 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:16.323 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.323 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.323 BaseBdev1_malloc 00:14:16.323 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.323 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:16.323 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.324 [2024-11-28 18:54:45.896451] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:16.324 [2024-11-28 18:54:45.896512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.324 [2024-11-28 18:54:45.896537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:16.324 [2024-11-28 18:54:45.896551] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.324 [2024-11-28 18:54:45.898604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.324 [2024-11-28 18:54:45.898644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.324 BaseBdev1 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.324 BaseBdev2_malloc 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.324 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.324 [2024-11-28 18:54:45.925231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:16.324 [2024-11-28 18:54:45.925291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:14:16.324 [2024-11-28 18:54:45.925310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:16.324 [2024-11-28 18:54:45.925320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.324 [2024-11-28 18:54:45.927425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.324 [2024-11-28 18:54:45.927473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:16.585 BaseBdev2 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 BaseBdev3_malloc 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 [2024-11-28 18:54:45.953906] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:16.585 [2024-11-28 18:54:45.954030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.585 [2024-11-28 18:54:45.954077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:16.585 [2024-11-28 18:54:45.954122] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.585 [2024-11-28 18:54:45.956141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.585 [2024-11-28 18:54:45.956220] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:16.585 BaseBdev3 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 spare_malloc 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.585 18:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 spare_delay 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 [2024-11-28 18:54:46.008988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:16.585 [2024-11-28 18:54:46.009120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.585 [2024-11-28 18:54:46.009145] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:16.585 [2024-11-28 18:54:46.009159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.585 [2024-11-28 18:54:46.011848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.585 [2024-11-28 18:54:46.011898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:16.585 spare 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 [2024-11-28 18:54:46.021011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.585 [2024-11-28 18:54:46.022871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.585 [2024-11-28 18:54:46.022986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.585 [2024-11-28 18:54:46.023069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:16.585 [2024-11-28 18:54:46.023078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:16.585 [2024-11-28 18:54:46.023325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:16.585 [2024-11-28 18:54:46.023739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:16.585 [2024-11-28 18:54:46.023763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 
00:14:16.585 [2024-11-28 18:54:46.023892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.585 "name": "raid_bdev1", 
00:14:16.585 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:16.585 "strip_size_kb": 64, 00:14:16.585 "state": "online", 00:14:16.585 "raid_level": "raid5f", 00:14:16.585 "superblock": false, 00:14:16.585 "num_base_bdevs": 3, 00:14:16.585 "num_base_bdevs_discovered": 3, 00:14:16.585 "num_base_bdevs_operational": 3, 00:14:16.585 "base_bdevs_list": [ 00:14:16.585 { 00:14:16.585 "name": "BaseBdev1", 00:14:16.585 "uuid": "759992fb-b7e9-5c1a-b31e-9fe767c8bf46", 00:14:16.585 "is_configured": true, 00:14:16.585 "data_offset": 0, 00:14:16.585 "data_size": 65536 00:14:16.585 }, 00:14:16.585 { 00:14:16.585 "name": "BaseBdev2", 00:14:16.585 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:16.585 "is_configured": true, 00:14:16.585 "data_offset": 0, 00:14:16.585 "data_size": 65536 00:14:16.585 }, 00:14:16.585 { 00:14:16.585 "name": "BaseBdev3", 00:14:16.585 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:16.585 "is_configured": true, 00:14:16.585 "data_offset": 0, 00:14:16.585 "data_size": 65536 00:14:16.585 } 00:14:16.585 ] 00:14:16.585 }' 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.585 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.846 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:16.846 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.846 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.846 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.846 [2024-11-28 18:54:46.449591] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=131072 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:17.106 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:17.106 [2024-11-28 18:54:46.681535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:14:17.106 /dev/nbd0 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:17.366 1+0 records in 00:14:17.366 1+0 records out 00:14:17.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466935 s, 8.8 MB/s 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:17.366 18:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:17.626 512+0 records in 00:14:17.626 512+0 records out 00:14:17.626 67108864 bytes (67 MB, 64 MiB) copied, 0.298102 s, 225 MB/s 00:14:17.626 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:17.626 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.626 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:17.626 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.626 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:17.626 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.626 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:17.887 [2024-11-28 18:54:47.253998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.887 [2024-11-28 18:54:47.286068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.887 "name": "raid_bdev1", 00:14:17.887 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:17.887 "strip_size_kb": 64, 00:14:17.887 "state": "online", 00:14:17.887 "raid_level": "raid5f", 00:14:17.887 "superblock": false, 00:14:17.887 "num_base_bdevs": 3, 00:14:17.887 "num_base_bdevs_discovered": 2, 00:14:17.887 "num_base_bdevs_operational": 2, 00:14:17.887 "base_bdevs_list": [ 00:14:17.887 { 00:14:17.887 "name": null, 00:14:17.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.887 "is_configured": false, 00:14:17.887 "data_offset": 0, 00:14:17.887 "data_size": 65536 00:14:17.887 }, 00:14:17.887 { 00:14:17.887 "name": "BaseBdev2", 00:14:17.887 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:17.887 "is_configured": true, 00:14:17.887 "data_offset": 0, 00:14:17.887 "data_size": 65536 00:14:17.887 }, 00:14:17.887 { 00:14:17.887 "name": "BaseBdev3", 00:14:17.887 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:17.887 "is_configured": true, 00:14:17.887 "data_offset": 0, 
00:14:17.887 "data_size": 65536 00:14:17.887 } 00:14:17.887 ] 00:14:17.887 }' 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.887 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.147 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.147 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.147 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.147 [2024-11-28 18:54:47.726238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.147 [2024-11-28 18:54:47.730983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ba90 00:14:18.147 18:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.147 18:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:18.147 [2024-11-28 18:54:47.733139] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.530 
18:54:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.530 "name": "raid_bdev1", 00:14:19.530 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:19.530 "strip_size_kb": 64, 00:14:19.530 "state": "online", 00:14:19.530 "raid_level": "raid5f", 00:14:19.530 "superblock": false, 00:14:19.530 "num_base_bdevs": 3, 00:14:19.530 "num_base_bdevs_discovered": 3, 00:14:19.530 "num_base_bdevs_operational": 3, 00:14:19.530 "process": { 00:14:19.530 "type": "rebuild", 00:14:19.530 "target": "spare", 00:14:19.530 "progress": { 00:14:19.530 "blocks": 20480, 00:14:19.530 "percent": 15 00:14:19.530 } 00:14:19.530 }, 00:14:19.530 "base_bdevs_list": [ 00:14:19.530 { 00:14:19.530 "name": "spare", 00:14:19.530 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:19.530 "is_configured": true, 00:14:19.530 "data_offset": 0, 00:14:19.530 "data_size": 65536 00:14:19.530 }, 00:14:19.530 { 00:14:19.530 "name": "BaseBdev2", 00:14:19.530 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:19.530 "is_configured": true, 00:14:19.530 "data_offset": 0, 00:14:19.530 "data_size": 65536 00:14:19.530 }, 00:14:19.530 { 00:14:19.530 "name": "BaseBdev3", 00:14:19.530 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:19.530 "is_configured": true, 00:14:19.530 "data_offset": 0, 00:14:19.530 "data_size": 65536 00:14:19.530 } 00:14:19.530 ] 00:14:19.530 }' 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.530 [2024-11-28 18:54:48.867383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.530 [2024-11-28 18:54:48.942196] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.530 [2024-11-28 18:54:48.942317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.530 [2024-11-28 18:54:48.942358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.530 [2024-11-28 18:54:48.942380] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.530 
18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.530 18:54:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.530 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.530 "name": "raid_bdev1", 00:14:19.530 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:19.530 "strip_size_kb": 64, 00:14:19.530 "state": "online", 00:14:19.530 "raid_level": "raid5f", 00:14:19.530 "superblock": false, 00:14:19.530 "num_base_bdevs": 3, 00:14:19.530 "num_base_bdevs_discovered": 2, 00:14:19.530 "num_base_bdevs_operational": 2, 00:14:19.530 "base_bdevs_list": [ 00:14:19.530 { 00:14:19.530 "name": null, 00:14:19.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.530 "is_configured": false, 00:14:19.530 "data_offset": 0, 00:14:19.530 "data_size": 65536 00:14:19.530 }, 00:14:19.530 { 00:14:19.530 "name": "BaseBdev2", 00:14:19.530 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:19.530 "is_configured": true, 00:14:19.530 "data_offset": 0, 00:14:19.530 "data_size": 65536 00:14:19.530 }, 00:14:19.530 { 00:14:19.530 "name": "BaseBdev3", 00:14:19.530 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:19.530 "is_configured": true, 00:14:19.530 "data_offset": 0, 00:14:19.530 "data_size": 65536 00:14:19.530 } 
00:14:19.530 ] 00:14:19.530 }' 00:14:19.530 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.530 18:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.790 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.790 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.790 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.790 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.790 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.790 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.790 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.790 18:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.790 18:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.050 "name": "raid_bdev1", 00:14:20.050 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:20.050 "strip_size_kb": 64, 00:14:20.050 "state": "online", 00:14:20.050 "raid_level": "raid5f", 00:14:20.050 "superblock": false, 00:14:20.050 "num_base_bdevs": 3, 00:14:20.050 "num_base_bdevs_discovered": 2, 00:14:20.050 "num_base_bdevs_operational": 2, 00:14:20.050 "base_bdevs_list": [ 00:14:20.050 { 00:14:20.050 "name": null, 00:14:20.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.050 "is_configured": false, 00:14:20.050 "data_offset": 0, 00:14:20.050 
"data_size": 65536 00:14:20.050 }, 00:14:20.050 { 00:14:20.050 "name": "BaseBdev2", 00:14:20.050 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:20.050 "is_configured": true, 00:14:20.050 "data_offset": 0, 00:14:20.050 "data_size": 65536 00:14:20.050 }, 00:14:20.050 { 00:14:20.050 "name": "BaseBdev3", 00:14:20.050 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:20.050 "is_configured": true, 00:14:20.050 "data_offset": 0, 00:14:20.050 "data_size": 65536 00:14:20.050 } 00:14:20.050 ] 00:14:20.050 }' 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.050 [2024-11-28 18:54:49.524450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.050 [2024-11-28 18:54:49.528796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.050 18:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:20.050 [2024-11-28 18:54:49.530929] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.990 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.990 "name": "raid_bdev1", 00:14:20.990 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:20.990 "strip_size_kb": 64, 00:14:20.990 "state": "online", 00:14:20.990 "raid_level": "raid5f", 00:14:20.990 "superblock": false, 00:14:20.990 "num_base_bdevs": 3, 00:14:20.990 "num_base_bdevs_discovered": 3, 00:14:20.990 "num_base_bdevs_operational": 3, 00:14:20.990 "process": { 00:14:20.990 "type": "rebuild", 00:14:20.990 "target": "spare", 00:14:20.990 "progress": { 00:14:20.990 "blocks": 20480, 00:14:20.990 "percent": 15 00:14:20.990 } 00:14:20.990 }, 00:14:20.990 "base_bdevs_list": [ 00:14:20.991 { 00:14:20.991 "name": "spare", 00:14:20.991 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:20.991 "is_configured": true, 00:14:20.991 "data_offset": 0, 00:14:20.991 "data_size": 65536 00:14:20.991 }, 00:14:20.991 { 00:14:20.991 "name": "BaseBdev2", 00:14:20.991 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 
00:14:20.991 "is_configured": true, 00:14:20.991 "data_offset": 0, 00:14:20.991 "data_size": 65536 00:14:20.991 }, 00:14:20.991 { 00:14:20.991 "name": "BaseBdev3", 00:14:20.991 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:20.991 "is_configured": true, 00:14:20.991 "data_offset": 0, 00:14:20.991 "data_size": 65536 00:14:20.991 } 00:14:20.991 ] 00:14:20.991 }' 00:14:20.991 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=443 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.251 "name": "raid_bdev1", 00:14:21.251 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:21.251 "strip_size_kb": 64, 00:14:21.251 "state": "online", 00:14:21.251 "raid_level": "raid5f", 00:14:21.251 "superblock": false, 00:14:21.251 "num_base_bdevs": 3, 00:14:21.251 "num_base_bdevs_discovered": 3, 00:14:21.251 "num_base_bdevs_operational": 3, 00:14:21.251 "process": { 00:14:21.251 "type": "rebuild", 00:14:21.251 "target": "spare", 00:14:21.251 "progress": { 00:14:21.251 "blocks": 22528, 00:14:21.251 "percent": 17 00:14:21.251 } 00:14:21.251 }, 00:14:21.251 "base_bdevs_list": [ 00:14:21.251 { 00:14:21.251 "name": "spare", 00:14:21.251 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:21.251 "is_configured": true, 00:14:21.251 "data_offset": 0, 00:14:21.251 "data_size": 65536 00:14:21.251 }, 00:14:21.251 { 00:14:21.251 "name": "BaseBdev2", 00:14:21.251 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:21.251 "is_configured": true, 00:14:21.251 "data_offset": 0, 00:14:21.251 "data_size": 65536 00:14:21.251 }, 00:14:21.251 { 00:14:21.251 "name": "BaseBdev3", 00:14:21.251 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:21.251 "is_configured": true, 00:14:21.251 "data_offset": 0, 00:14:21.251 "data_size": 65536 00:14:21.251 } 00:14:21.251 ] 00:14:21.251 }' 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.251 18:54:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.251 18:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.633 "name": "raid_bdev1", 00:14:22.633 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:22.633 "strip_size_kb": 64, 00:14:22.633 "state": "online", 00:14:22.633 "raid_level": "raid5f", 00:14:22.633 "superblock": false, 00:14:22.633 "num_base_bdevs": 3, 00:14:22.633 
"num_base_bdevs_discovered": 3, 00:14:22.633 "num_base_bdevs_operational": 3, 00:14:22.633 "process": { 00:14:22.633 "type": "rebuild", 00:14:22.633 "target": "spare", 00:14:22.633 "progress": { 00:14:22.633 "blocks": 47104, 00:14:22.633 "percent": 35 00:14:22.633 } 00:14:22.633 }, 00:14:22.633 "base_bdevs_list": [ 00:14:22.633 { 00:14:22.633 "name": "spare", 00:14:22.633 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:22.633 "is_configured": true, 00:14:22.633 "data_offset": 0, 00:14:22.633 "data_size": 65536 00:14:22.633 }, 00:14:22.633 { 00:14:22.633 "name": "BaseBdev2", 00:14:22.633 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:22.633 "is_configured": true, 00:14:22.633 "data_offset": 0, 00:14:22.633 "data_size": 65536 00:14:22.633 }, 00:14:22.633 { 00:14:22.633 "name": "BaseBdev3", 00:14:22.633 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:22.633 "is_configured": true, 00:14:22.633 "data_offset": 0, 00:14:22.633 "data_size": 65536 00:14:22.633 } 00:14:22.633 ] 00:14:22.633 }' 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.633 18:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.579 18:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.579 18:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.579 18:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.579 18:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:23.579 18:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.579 18:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.579 "name": "raid_bdev1", 00:14:23.579 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:23.579 "strip_size_kb": 64, 00:14:23.579 "state": "online", 00:14:23.579 "raid_level": "raid5f", 00:14:23.579 "superblock": false, 00:14:23.579 "num_base_bdevs": 3, 00:14:23.579 "num_base_bdevs_discovered": 3, 00:14:23.579 "num_base_bdevs_operational": 3, 00:14:23.579 "process": { 00:14:23.579 "type": "rebuild", 00:14:23.579 "target": "spare", 00:14:23.579 "progress": { 00:14:23.579 "blocks": 69632, 00:14:23.579 "percent": 53 00:14:23.579 } 00:14:23.579 }, 00:14:23.579 "base_bdevs_list": [ 00:14:23.579 { 00:14:23.579 "name": "spare", 00:14:23.579 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:23.579 "is_configured": true, 00:14:23.579 "data_offset": 0, 00:14:23.579 "data_size": 65536 00:14:23.579 }, 00:14:23.579 { 00:14:23.579 "name": "BaseBdev2", 00:14:23.579 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:23.579 "is_configured": true, 00:14:23.579 "data_offset": 0, 00:14:23.579 "data_size": 65536 00:14:23.579 }, 00:14:23.579 { 00:14:23.579 "name": "BaseBdev3", 00:14:23.579 "uuid": 
"aac48081-80c3-5906-8e94-756d52b29502", 00:14:23.579 "is_configured": true, 00:14:23.579 "data_offset": 0, 00:14:23.579 "data_size": 65536 00:14:23.579 } 00:14:23.579 ] 00:14:23.579 }' 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.579 18:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.962 "name": "raid_bdev1", 00:14:24.962 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:24.962 "strip_size_kb": 64, 00:14:24.962 "state": "online", 00:14:24.962 "raid_level": "raid5f", 00:14:24.962 "superblock": false, 00:14:24.962 "num_base_bdevs": 3, 00:14:24.962 "num_base_bdevs_discovered": 3, 00:14:24.962 "num_base_bdevs_operational": 3, 00:14:24.962 "process": { 00:14:24.962 "type": "rebuild", 00:14:24.962 "target": "spare", 00:14:24.962 "progress": { 00:14:24.962 "blocks": 94208, 00:14:24.962 "percent": 71 00:14:24.962 } 00:14:24.962 }, 00:14:24.962 "base_bdevs_list": [ 00:14:24.962 { 00:14:24.962 "name": "spare", 00:14:24.962 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:24.962 "is_configured": true, 00:14:24.962 "data_offset": 0, 00:14:24.962 "data_size": 65536 00:14:24.962 }, 00:14:24.962 { 00:14:24.962 "name": "BaseBdev2", 00:14:24.962 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:24.962 "is_configured": true, 00:14:24.962 "data_offset": 0, 00:14:24.962 "data_size": 65536 00:14:24.962 }, 00:14:24.962 { 00:14:24.962 "name": "BaseBdev3", 00:14:24.962 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:24.962 "is_configured": true, 00:14:24.962 "data_offset": 0, 00:14:24.962 "data_size": 65536 00:14:24.962 } 00:14:24.962 ] 00:14:24.962 }' 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.962 18:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.902 "name": "raid_bdev1", 00:14:25.902 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:25.902 "strip_size_kb": 64, 00:14:25.902 "state": "online", 00:14:25.902 "raid_level": "raid5f", 00:14:25.902 "superblock": false, 00:14:25.902 "num_base_bdevs": 3, 00:14:25.902 "num_base_bdevs_discovered": 3, 00:14:25.902 "num_base_bdevs_operational": 3, 00:14:25.902 "process": { 00:14:25.902 "type": "rebuild", 00:14:25.902 "target": "spare", 00:14:25.902 "progress": { 00:14:25.902 "blocks": 116736, 00:14:25.902 "percent": 89 00:14:25.902 } 00:14:25.902 }, 00:14:25.902 "base_bdevs_list": [ 00:14:25.902 { 00:14:25.902 "name": "spare", 00:14:25.902 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:25.902 "is_configured": true, 00:14:25.902 "data_offset": 0, 00:14:25.902 "data_size": 
65536 00:14:25.902 }, 00:14:25.902 { 00:14:25.902 "name": "BaseBdev2", 00:14:25.902 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:25.902 "is_configured": true, 00:14:25.902 "data_offset": 0, 00:14:25.902 "data_size": 65536 00:14:25.902 }, 00:14:25.902 { 00:14:25.902 "name": "BaseBdev3", 00:14:25.902 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:25.902 "is_configured": true, 00:14:25.902 "data_offset": 0, 00:14:25.902 "data_size": 65536 00:14:25.902 } 00:14:25.902 ] 00:14:25.902 }' 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.902 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.903 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.903 18:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:26.472 [2024-11-28 18:54:55.976121] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:26.472 [2024-11-28 18:54:55.976243] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:26.472 [2024-11-28 18:54:55.976306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.042 "name": "raid_bdev1", 00:14:27.042 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:27.042 "strip_size_kb": 64, 00:14:27.042 "state": "online", 00:14:27.042 "raid_level": "raid5f", 00:14:27.042 "superblock": false, 00:14:27.042 "num_base_bdevs": 3, 00:14:27.042 "num_base_bdevs_discovered": 3, 00:14:27.042 "num_base_bdevs_operational": 3, 00:14:27.042 "base_bdevs_list": [ 00:14:27.042 { 00:14:27.042 "name": "spare", 00:14:27.042 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:27.042 "is_configured": true, 00:14:27.042 "data_offset": 0, 00:14:27.042 "data_size": 65536 00:14:27.042 }, 00:14:27.042 { 00:14:27.042 "name": "BaseBdev2", 00:14:27.042 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:27.042 "is_configured": true, 00:14:27.042 "data_offset": 0, 00:14:27.042 "data_size": 65536 00:14:27.042 }, 00:14:27.042 { 00:14:27.042 "name": "BaseBdev3", 00:14:27.042 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:27.042 "is_configured": true, 00:14:27.042 "data_offset": 0, 00:14:27.042 "data_size": 65536 00:14:27.042 } 00:14:27.042 ] 00:14:27.042 }' 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.042 18:54:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.042 "name": "raid_bdev1", 00:14:27.042 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:27.042 "strip_size_kb": 64, 00:14:27.042 "state": "online", 00:14:27.042 "raid_level": "raid5f", 00:14:27.042 "superblock": false, 00:14:27.042 "num_base_bdevs": 3, 00:14:27.042 "num_base_bdevs_discovered": 3, 00:14:27.042 "num_base_bdevs_operational": 3, 00:14:27.042 "base_bdevs_list": [ 00:14:27.042 
{ 00:14:27.042 "name": "spare", 00:14:27.042 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:27.042 "is_configured": true, 00:14:27.042 "data_offset": 0, 00:14:27.042 "data_size": 65536 00:14:27.042 }, 00:14:27.042 { 00:14:27.042 "name": "BaseBdev2", 00:14:27.042 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:27.042 "is_configured": true, 00:14:27.042 "data_offset": 0, 00:14:27.042 "data_size": 65536 00:14:27.042 }, 00:14:27.042 { 00:14:27.042 "name": "BaseBdev3", 00:14:27.042 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:27.042 "is_configured": true, 00:14:27.042 "data_offset": 0, 00:14:27.042 "data_size": 65536 00:14:27.042 } 00:14:27.042 ] 00:14:27.042 }' 00:14:27.042 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.302 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.303 "name": "raid_bdev1", 00:14:27.303 "uuid": "d023fb8e-4be9-4881-9a4a-537fbb194517", 00:14:27.303 "strip_size_kb": 64, 00:14:27.303 "state": "online", 00:14:27.303 "raid_level": "raid5f", 00:14:27.303 "superblock": false, 00:14:27.303 "num_base_bdevs": 3, 00:14:27.303 "num_base_bdevs_discovered": 3, 00:14:27.303 "num_base_bdevs_operational": 3, 00:14:27.303 "base_bdevs_list": [ 00:14:27.303 { 00:14:27.303 "name": "spare", 00:14:27.303 "uuid": "a8c01d75-5f4e-5043-9811-ce2d68223d7e", 00:14:27.303 "is_configured": true, 00:14:27.303 "data_offset": 0, 00:14:27.303 "data_size": 65536 00:14:27.303 }, 00:14:27.303 { 00:14:27.303 "name": "BaseBdev2", 00:14:27.303 "uuid": "cc0d17b1-9d30-51a7-b3ae-a8a218f904f0", 00:14:27.303 "is_configured": true, 00:14:27.303 "data_offset": 0, 00:14:27.303 "data_size": 65536 00:14:27.303 }, 00:14:27.303 { 00:14:27.303 "name": "BaseBdev3", 00:14:27.303 "uuid": "aac48081-80c3-5906-8e94-756d52b29502", 00:14:27.303 "is_configured": true, 00:14:27.303 "data_offset": 0, 00:14:27.303 "data_size": 65536 00:14:27.303 } 00:14:27.303 ] 00:14:27.303 }' 00:14:27.303 18:54:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.303 18:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.563 [2024-11-28 18:54:57.117979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.563 [2024-11-28 18:54:57.118010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.563 [2024-11-28 18:54:57.118089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.563 [2024-11-28 18:54:57.118168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.563 [2024-11-28 18:54:57.118181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:27.563 18:54:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:27.563 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:27.823 /dev/nbd0 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.823 1+0 records in 00:14:27.823 1+0 records out 00:14:27.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584763 s, 7.0 MB/s 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:27.823 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:28.084 /dev/nbd1 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@873 -- # local i 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.084 1+0 records in 00:14:28.084 1+0 records out 00:14:28.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029929 s, 13.7 MB/s 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.084 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:28.344 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks 
/var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:28.344 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.344 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.344 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.344 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:28.344 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.344 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.604 18:54:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:28.864 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:28.864 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:14:28.864 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:28.864 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.864 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.864 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 93582 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 93582 ']' 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 93582 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93582 00:14:28.865 killing process with pid 93582 00:14:28.865 Received shutdown signal, test time was about 60.000000 seconds 00:14:28.865 00:14:28.865 Latency(us) 00:14:28.865 [2024-11-28T18:54:58.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.865 [2024-11-28T18:54:58.471Z] =================================================================================================================== 00:14:28.865 [2024-11-28T18:54:58.471Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93582' 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 93582 00:14:28.865 [2024-11-28 18:54:58.300485] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.865 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 93582 00:14:28.865 [2024-11-28 18:54:58.339751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:29.125 00:14:29.125 real 0m13.576s 00:14:29.125 user 0m16.882s 00:14:29.125 sys 0m2.059s 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.125 ************************************ 00:14:29.125 END TEST raid5f_rebuild_test 00:14:29.125 ************************************ 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.125 18:54:58 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:29.125 18:54:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:29.125 18:54:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.125 18:54:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.125 ************************************ 00:14:29.125 START TEST raid5f_rebuild_test_sb 00:14:29.125 ************************************ 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:29.125 18:54:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.125 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=94000 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 94000 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 94000 ']' 00:14:29.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.126 18:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.386 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:29.386 Zero copy mechanism will not be used. 00:14:29.386 [2024-11-28 18:54:58.734787] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:29.386 [2024-11-28 18:54:58.734912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94000 ] 00:14:29.386 [2024-11-28 18:54:58.874630] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:29.386 [2024-11-28 18:54:58.913490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.386 [2024-11-28 18:54:58.940204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.386 [2024-11-28 18:54:58.983640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.386 [2024-11-28 18:54:58.983684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.957 18:54:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.957 BaseBdev1_malloc 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.957 [2024-11-28 18:54:59.552896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.957 [2024-11-28 18:54:59.552969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.957 [2024-11-28 18:54:59.552993] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.957 [2024-11-28 18:54:59.553006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.957 [2024-11-28 18:54:59.555128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.957 [2024-11-28 18:54:59.555166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.957 BaseBdev1 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.957 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.218 BaseBdev2_malloc 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.218 [2024-11-28 18:54:59.581633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:30.218 [2024-11-28 18:54:59.581685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.218 [2024-11-28 18:54:59.581718] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:30.218 [2024-11-28 18:54:59.581728] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.218 [2024-11-28 18:54:59.583699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.218 [2024-11-28 18:54:59.583733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.218 BaseBdev2 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.218 BaseBdev3_malloc 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.218 [2024-11-28 18:54:59.610190] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:30.218 [2024-11-28 18:54:59.610245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.218 [2024-11-28 18:54:59.610264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:30.218 [2024-11-28 18:54:59.610273] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.218 [2024-11-28 18:54:59.612288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.218 [2024-11-28 18:54:59.612338] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.218 BaseBdev3 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.218 spare_malloc 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.218 spare_delay 00:14:30.218 
18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.218 [2024-11-28 18:54:59.666744] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:30.218 [2024-11-28 18:54:59.666808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.218 [2024-11-28 18:54:59.666827] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:30.218 [2024-11-28 18:54:59.666839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.218 [2024-11-28 18:54:59.669201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.218 [2024-11-28 18:54:59.669247] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:30.218 spare 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.218 [2024-11-28 18:54:59.678792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.218 [2024-11-28 18:54:59.680589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.218 [2024-11-28 18:54:59.680650] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.218 [2024-11-28 18:54:59.680805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:30.218 [2024-11-28 18:54:59.680817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:30.218 [2024-11-28 18:54:59.681058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:30.218 [2024-11-28 18:54:59.681481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:30.218 [2024-11-28 18:54:59.681495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:30.218 [2024-11-28 18:54:59.681603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:30.218 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.219 "name": "raid_bdev1", 00:14:30.219 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:30.219 "strip_size_kb": 64, 00:14:30.219 "state": "online", 00:14:30.219 "raid_level": "raid5f", 00:14:30.219 "superblock": true, 00:14:30.219 "num_base_bdevs": 3, 00:14:30.219 "num_base_bdevs_discovered": 3, 00:14:30.219 "num_base_bdevs_operational": 3, 00:14:30.219 "base_bdevs_list": [ 00:14:30.219 { 00:14:30.219 "name": "BaseBdev1", 00:14:30.219 "uuid": "c0abe419-17d0-5362-b3f4-d76070f77e75", 00:14:30.219 "is_configured": true, 00:14:30.219 "data_offset": 2048, 00:14:30.219 "data_size": 63488 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "name": "BaseBdev2", 00:14:30.219 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:30.219 "is_configured": true, 00:14:30.219 "data_offset": 2048, 00:14:30.219 "data_size": 63488 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "name": "BaseBdev3", 00:14:30.219 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:30.219 "is_configured": true, 00:14:30.219 "data_offset": 2048, 00:14:30.219 "data_size": 63488 00:14:30.219 } 00:14:30.219 ] 00:14:30.219 }' 00:14:30.219 18:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.219 18:54:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:30.789 [2024-11-28 18:55:00.131361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:30.789 18:55:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.789 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:31.049 [2024-11-28 18:55:00.419403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:14:31.049 /dev/nbd0 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.049 1+0 records in 00:14:31.049 1+0 records out 00:14:31.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548838 s, 7.5 MB/s 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:31.049 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:31.308 496+0 records in 00:14:31.308 496+0 records out 00:14:31.308 65011712 bytes (65 MB, 62 MiB) copied, 0.301846 s, 215 MB/s 00:14:31.308 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:31.308 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.308 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:31.308 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.309 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:31.309 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.309 18:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:31.568 [2024-11-28 18:55:01.012570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.568 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:31.568 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:31.568 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:31.568 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.569 [2024-11-28 18:55:01.044643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.569 "name": "raid_bdev1", 00:14:31.569 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:31.569 "strip_size_kb": 64, 00:14:31.569 "state": "online", 00:14:31.569 "raid_level": "raid5f", 00:14:31.569 "superblock": true, 00:14:31.569 "num_base_bdevs": 3, 00:14:31.569 "num_base_bdevs_discovered": 2, 00:14:31.569 "num_base_bdevs_operational": 2, 00:14:31.569 "base_bdevs_list": [ 00:14:31.569 { 00:14:31.569 "name": null, 00:14:31.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.569 "is_configured": false, 00:14:31.569 "data_offset": 0, 00:14:31.569 "data_size": 63488 00:14:31.569 }, 00:14:31.569 { 00:14:31.569 "name": "BaseBdev2", 00:14:31.569 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:31.569 "is_configured": true, 00:14:31.569 "data_offset": 2048, 00:14:31.569 "data_size": 63488 00:14:31.569 }, 00:14:31.569 { 00:14:31.569 "name": "BaseBdev3", 00:14:31.569 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:31.569 "is_configured": true, 00:14:31.569 "data_offset": 2048, 00:14:31.569 "data_size": 63488 00:14:31.569 } 00:14:31.569 ] 00:14:31.569 }' 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.569 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.138 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.138 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.138 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.138 [2024-11-28 18:55:01.544803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.138 [2024-11-28 18:55:01.549528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029390 00:14:32.138 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.138 18:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:32.138 [2024-11-28 18:55:01.551731] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.089 "name": "raid_bdev1", 00:14:33.089 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:33.089 "strip_size_kb": 64, 00:14:33.089 "state": "online", 00:14:33.089 "raid_level": "raid5f", 00:14:33.089 "superblock": true, 00:14:33.089 "num_base_bdevs": 3, 00:14:33.089 "num_base_bdevs_discovered": 3, 00:14:33.089 "num_base_bdevs_operational": 3, 00:14:33.089 "process": { 00:14:33.089 "type": "rebuild", 00:14:33.089 "target": "spare", 00:14:33.089 "progress": { 
00:14:33.089 "blocks": 20480, 00:14:33.089 "percent": 16 00:14:33.089 } 00:14:33.089 }, 00:14:33.089 "base_bdevs_list": [ 00:14:33.089 { 00:14:33.089 "name": "spare", 00:14:33.089 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:33.089 "is_configured": true, 00:14:33.089 "data_offset": 2048, 00:14:33.089 "data_size": 63488 00:14:33.089 }, 00:14:33.089 { 00:14:33.089 "name": "BaseBdev2", 00:14:33.089 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:33.089 "is_configured": true, 00:14:33.089 "data_offset": 2048, 00:14:33.089 "data_size": 63488 00:14:33.089 }, 00:14:33.089 { 00:14:33.089 "name": "BaseBdev3", 00:14:33.089 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:33.089 "is_configured": true, 00:14:33.089 "data_offset": 2048, 00:14:33.089 "data_size": 63488 00:14:33.089 } 00:14:33.089 ] 00:14:33.089 }' 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.089 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.349 [2024-11-28 18:55:02.713883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.349 [2024-11-28 18:55:02.760819] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.349 [2024-11-28 18:55:02.760884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:33.349 [2024-11-28 18:55:02.760901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.349 [2024-11-28 18:55:02.760909] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.349 18:55:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.349 "name": "raid_bdev1", 00:14:33.349 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:33.349 "strip_size_kb": 64, 00:14:33.349 "state": "online", 00:14:33.349 "raid_level": "raid5f", 00:14:33.349 "superblock": true, 00:14:33.349 "num_base_bdevs": 3, 00:14:33.349 "num_base_bdevs_discovered": 2, 00:14:33.349 "num_base_bdevs_operational": 2, 00:14:33.349 "base_bdevs_list": [ 00:14:33.349 { 00:14:33.349 "name": null, 00:14:33.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.349 "is_configured": false, 00:14:33.349 "data_offset": 0, 00:14:33.349 "data_size": 63488 00:14:33.349 }, 00:14:33.349 { 00:14:33.349 "name": "BaseBdev2", 00:14:33.349 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:33.349 "is_configured": true, 00:14:33.349 "data_offset": 2048, 00:14:33.349 "data_size": 63488 00:14:33.349 }, 00:14:33.349 { 00:14:33.349 "name": "BaseBdev3", 00:14:33.349 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:33.349 "is_configured": true, 00:14:33.349 "data_offset": 2048, 00:14:33.349 "data_size": 63488 00:14:33.349 } 00:14:33.349 ] 00:14:33.349 }' 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.349 18:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.920 "name": "raid_bdev1", 00:14:33.920 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:33.920 "strip_size_kb": 64, 00:14:33.920 "state": "online", 00:14:33.920 "raid_level": "raid5f", 00:14:33.920 "superblock": true, 00:14:33.920 "num_base_bdevs": 3, 00:14:33.920 "num_base_bdevs_discovered": 2, 00:14:33.920 "num_base_bdevs_operational": 2, 00:14:33.920 "base_bdevs_list": [ 00:14:33.920 { 00:14:33.920 "name": null, 00:14:33.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.920 "is_configured": false, 00:14:33.920 "data_offset": 0, 00:14:33.920 "data_size": 63488 00:14:33.920 }, 00:14:33.920 { 00:14:33.920 "name": "BaseBdev2", 00:14:33.920 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:33.920 "is_configured": true, 00:14:33.920 "data_offset": 2048, 00:14:33.920 "data_size": 63488 00:14:33.920 }, 00:14:33.920 { 00:14:33.920 "name": "BaseBdev3", 00:14:33.920 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:33.920 "is_configured": true, 00:14:33.920 "data_offset": 2048, 00:14:33.920 "data_size": 63488 00:14:33.920 } 00:14:33.920 ] 00:14:33.920 }' 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.920 [2024-11-28 18:55:03.415039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.920 [2024-11-28 18:55:03.419308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029460 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.920 18:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:33.920 [2024-11-28 18:55:03.421398] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.859 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.120 "name": "raid_bdev1", 00:14:35.120 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:35.120 "strip_size_kb": 64, 00:14:35.120 "state": "online", 00:14:35.120 "raid_level": "raid5f", 00:14:35.120 "superblock": true, 00:14:35.120 "num_base_bdevs": 3, 00:14:35.120 "num_base_bdevs_discovered": 3, 00:14:35.120 "num_base_bdevs_operational": 3, 00:14:35.120 "process": { 00:14:35.120 "type": "rebuild", 00:14:35.120 "target": "spare", 00:14:35.120 "progress": { 00:14:35.120 "blocks": 20480, 00:14:35.120 "percent": 16 00:14:35.120 } 00:14:35.120 }, 00:14:35.120 "base_bdevs_list": [ 00:14:35.120 { 00:14:35.120 "name": "spare", 00:14:35.120 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:35.120 "is_configured": true, 00:14:35.120 "data_offset": 2048, 00:14:35.120 "data_size": 63488 00:14:35.120 }, 00:14:35.120 { 00:14:35.120 "name": "BaseBdev2", 00:14:35.120 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:35.120 "is_configured": true, 00:14:35.120 "data_offset": 2048, 00:14:35.120 "data_size": 63488 00:14:35.120 }, 00:14:35.120 { 00:14:35.120 "name": "BaseBdev3", 00:14:35.120 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:35.120 "is_configured": true, 00:14:35.120 "data_offset": 2048, 00:14:35.120 "data_size": 63488 00:14:35.120 } 00:14:35.120 ] 00:14:35.120 }' 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.120 
18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:35.120 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=457 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
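The `line 666: [: =: unary operator expected` message above is a real shell bug captured by the log: `'[' = false ']'` shows that a variable expanded to the empty string unquoted, leaving `[` with no left operand. A minimal standalone reproduction (a sketch, not the actual `bdev_raid.sh` code):

```shell
# Sketch of the failure logged above ("[: =: unary operator expected"):
# when an unquoted variable expands to nothing, `[` sees only `= false`
# and cannot parse it as a binary comparison.
var=""

# Unquoted form: becomes `[ = false ]`, a syntax error (exit status 2).
[ $var = false ] 2>/dev/null || echo "unquoted test failed to parse"

# Quoted form: `[ "" = false ]` is well-formed and simply evaluates false.
if [ "$var" = false ]; then
    echo "var is false"
else
    echo "var is empty, comparison is well-formed"
fi
```

Quoting the expansion (or using `[[ ... ]]`, which does not word-split) avoids the parse error while preserving the intended comparison.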
00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.120 "name": "raid_bdev1", 00:14:35.120 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:35.120 "strip_size_kb": 64, 00:14:35.120 "state": "online", 00:14:35.120 "raid_level": "raid5f", 00:14:35.120 "superblock": true, 00:14:35.120 "num_base_bdevs": 3, 00:14:35.120 "num_base_bdevs_discovered": 3, 00:14:35.120 "num_base_bdevs_operational": 3, 00:14:35.120 "process": { 00:14:35.120 "type": "rebuild", 00:14:35.120 "target": "spare", 00:14:35.120 "progress": { 00:14:35.120 "blocks": 22528, 00:14:35.120 "percent": 17 00:14:35.120 } 00:14:35.120 }, 00:14:35.120 "base_bdevs_list": [ 00:14:35.120 { 00:14:35.120 "name": "spare", 00:14:35.120 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:35.120 "is_configured": true, 00:14:35.120 "data_offset": 2048, 00:14:35.120 "data_size": 63488 00:14:35.120 }, 00:14:35.120 { 00:14:35.120 "name": "BaseBdev2", 00:14:35.120 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:35.120 "is_configured": true, 00:14:35.120 "data_offset": 2048, 00:14:35.120 "data_size": 63488 00:14:35.120 }, 00:14:35.120 { 00:14:35.120 "name": "BaseBdev3", 00:14:35.120 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:35.120 "is_configured": true, 00:14:35.120 "data_offset": 2048, 00:14:35.120 "data_size": 63488 00:14:35.120 } 00:14:35.120 ] 00:14:35.120 }' 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.120 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.121 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.380 18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.380 
18:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.317 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.317 "name": "raid_bdev1", 00:14:36.317 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:36.317 "strip_size_kb": 64, 00:14:36.317 "state": "online", 00:14:36.317 "raid_level": "raid5f", 00:14:36.317 "superblock": true, 00:14:36.317 "num_base_bdevs": 3, 00:14:36.317 "num_base_bdevs_discovered": 3, 00:14:36.317 "num_base_bdevs_operational": 3, 00:14:36.317 "process": { 00:14:36.317 "type": "rebuild", 00:14:36.317 "target": "spare", 00:14:36.317 "progress": { 00:14:36.317 "blocks": 47104, 00:14:36.317 "percent": 37 00:14:36.317 } 00:14:36.317 }, 00:14:36.317 
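The repeating `verify_raid_bdev_process ... / sleep 1` trace above is a deadline-polling loop: the harness sets `local timeout=457` and re-checks rebuild progress once per second while `(( SECONDS < timeout ))` holds. Bash's built-in `SECONDS` counts seconds since shell start, so the guard needs no external `date` call. A generic sketch of the pattern (the `poll_until` helper and its 5-second deadline are illustrative, not the actual SPDK function):

```shell
# Generic deadline-polling loop in the style of bdev_raid.sh:
# (( SECONDS < timeout )) is a wall-clock guard using bash's built-in
# SECONDS counter; the condition command is retried once per second.
poll_until() {
    local timeout=$((SECONDS + 5))   # 5-second deadline for this sketch
    while (( SECONDS < timeout )); do
        if "$@"; then
            return 0                 # condition met before the deadline
        fi
        sleep 1
    done
    return 1                         # deadline expired
}

poll_until true && echo "condition met"
```

In the logged test the polled condition is the `jq` check on `rpc_cmd bdev_raid_get_bdevs all` output, and the loop exits early once the rebuild's `process` object reports the expected type and target.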
"base_bdevs_list": [ 00:14:36.317 { 00:14:36.317 "name": "spare", 00:14:36.317 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:36.317 "is_configured": true, 00:14:36.317 "data_offset": 2048, 00:14:36.317 "data_size": 63488 00:14:36.317 }, 00:14:36.317 { 00:14:36.317 "name": "BaseBdev2", 00:14:36.317 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:36.317 "is_configured": true, 00:14:36.317 "data_offset": 2048, 00:14:36.317 "data_size": 63488 00:14:36.317 }, 00:14:36.317 { 00:14:36.317 "name": "BaseBdev3", 00:14:36.317 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:36.317 "is_configured": true, 00:14:36.317 "data_offset": 2048, 00:14:36.317 "data_size": 63488 00:14:36.317 } 00:14:36.317 ] 00:14:36.317 }' 00:14:36.318 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.318 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.318 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.318 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.318 18:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.697 18:55:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.697 "name": "raid_bdev1", 00:14:37.697 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:37.697 "strip_size_kb": 64, 00:14:37.697 "state": "online", 00:14:37.697 "raid_level": "raid5f", 00:14:37.697 "superblock": true, 00:14:37.697 "num_base_bdevs": 3, 00:14:37.697 "num_base_bdevs_discovered": 3, 00:14:37.697 "num_base_bdevs_operational": 3, 00:14:37.697 "process": { 00:14:37.697 "type": "rebuild", 00:14:37.697 "target": "spare", 00:14:37.697 "progress": { 00:14:37.697 "blocks": 69632, 00:14:37.697 "percent": 54 00:14:37.697 } 00:14:37.697 }, 00:14:37.697 "base_bdevs_list": [ 00:14:37.697 { 00:14:37.697 "name": "spare", 00:14:37.697 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:37.697 "is_configured": true, 00:14:37.697 "data_offset": 2048, 00:14:37.697 "data_size": 63488 00:14:37.697 }, 00:14:37.697 { 00:14:37.697 "name": "BaseBdev2", 00:14:37.697 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:37.697 "is_configured": true, 00:14:37.697 "data_offset": 2048, 00:14:37.697 "data_size": 63488 00:14:37.697 }, 00:14:37.697 { 00:14:37.697 "name": "BaseBdev3", 00:14:37.697 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:37.697 "is_configured": true, 00:14:37.697 "data_offset": 2048, 00:14:37.697 "data_size": 63488 00:14:37.697 } 00:14:37.697 ] 00:14:37.697 }' 00:14:37.697 18:55:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.697 18:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.697 18:55:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.697 18:55:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.635 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.636 "name": "raid_bdev1", 00:14:38.636 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:38.636 
"strip_size_kb": 64, 00:14:38.636 "state": "online", 00:14:38.636 "raid_level": "raid5f", 00:14:38.636 "superblock": true, 00:14:38.636 "num_base_bdevs": 3, 00:14:38.636 "num_base_bdevs_discovered": 3, 00:14:38.636 "num_base_bdevs_operational": 3, 00:14:38.636 "process": { 00:14:38.636 "type": "rebuild", 00:14:38.636 "target": "spare", 00:14:38.636 "progress": { 00:14:38.636 "blocks": 94208, 00:14:38.636 "percent": 74 00:14:38.636 } 00:14:38.636 }, 00:14:38.636 "base_bdevs_list": [ 00:14:38.636 { 00:14:38.636 "name": "spare", 00:14:38.636 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:38.636 "is_configured": true, 00:14:38.636 "data_offset": 2048, 00:14:38.636 "data_size": 63488 00:14:38.636 }, 00:14:38.636 { 00:14:38.636 "name": "BaseBdev2", 00:14:38.636 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:38.636 "is_configured": true, 00:14:38.636 "data_offset": 2048, 00:14:38.636 "data_size": 63488 00:14:38.636 }, 00:14:38.636 { 00:14:38.636 "name": "BaseBdev3", 00:14:38.636 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:38.636 "is_configured": true, 00:14:38.636 "data_offset": 2048, 00:14:38.636 "data_size": 63488 00:14:38.636 } 00:14:38.636 ] 00:14:38.636 }' 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.636 18:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.018 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.018 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:14:40.018 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.018 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.018 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.018 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.018 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.018 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.019 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.019 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.019 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.019 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.019 "name": "raid_bdev1", 00:14:40.019 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:40.019 "strip_size_kb": 64, 00:14:40.019 "state": "online", 00:14:40.019 "raid_level": "raid5f", 00:14:40.019 "superblock": true, 00:14:40.019 "num_base_bdevs": 3, 00:14:40.019 "num_base_bdevs_discovered": 3, 00:14:40.019 "num_base_bdevs_operational": 3, 00:14:40.019 "process": { 00:14:40.019 "type": "rebuild", 00:14:40.019 "target": "spare", 00:14:40.019 "progress": { 00:14:40.019 "blocks": 116736, 00:14:40.019 "percent": 91 00:14:40.019 } 00:14:40.019 }, 00:14:40.019 "base_bdevs_list": [ 00:14:40.019 { 00:14:40.019 "name": "spare", 00:14:40.019 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:40.019 "is_configured": true, 00:14:40.019 "data_offset": 2048, 00:14:40.019 "data_size": 63488 00:14:40.019 }, 00:14:40.019 { 00:14:40.019 "name": "BaseBdev2", 00:14:40.019 "uuid": 
"ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:40.019 "is_configured": true, 00:14:40.019 "data_offset": 2048, 00:14:40.019 "data_size": 63488 00:14:40.019 }, 00:14:40.019 { 00:14:40.019 "name": "BaseBdev3", 00:14:40.019 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:40.019 "is_configured": true, 00:14:40.019 "data_offset": 2048, 00:14:40.019 "data_size": 63488 00:14:40.019 } 00:14:40.019 ] 00:14:40.019 }' 00:14:40.019 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.019 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.019 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.019 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.019 18:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.278 [2024-11-28 18:55:09.665105] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:40.278 [2024-11-28 18:55:09.665185] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:40.278 [2024-11-28 18:55:09.665290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.848 "name": "raid_bdev1", 00:14:40.848 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:40.848 "strip_size_kb": 64, 00:14:40.848 "state": "online", 00:14:40.848 "raid_level": "raid5f", 00:14:40.848 "superblock": true, 00:14:40.848 "num_base_bdevs": 3, 00:14:40.848 "num_base_bdevs_discovered": 3, 00:14:40.848 "num_base_bdevs_operational": 3, 00:14:40.848 "base_bdevs_list": [ 00:14:40.848 { 00:14:40.848 "name": "spare", 00:14:40.848 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:40.848 "is_configured": true, 00:14:40.848 "data_offset": 2048, 00:14:40.848 "data_size": 63488 00:14:40.848 }, 00:14:40.848 { 00:14:40.848 "name": "BaseBdev2", 00:14:40.848 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:40.848 "is_configured": true, 00:14:40.848 "data_offset": 2048, 00:14:40.848 "data_size": 63488 00:14:40.848 }, 00:14:40.848 { 00:14:40.848 "name": "BaseBdev3", 00:14:40.848 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:40.848 "is_configured": true, 00:14:40.848 "data_offset": 2048, 00:14:40.848 "data_size": 63488 00:14:40.848 } 00:14:40.848 ] 00:14:40.848 }' 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:40.848 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.108 "name": "raid_bdev1", 00:14:41.108 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:41.108 "strip_size_kb": 64, 00:14:41.108 "state": "online", 00:14:41.108 "raid_level": "raid5f", 00:14:41.108 "superblock": true, 00:14:41.108 "num_base_bdevs": 3, 00:14:41.108 "num_base_bdevs_discovered": 3, 00:14:41.108 "num_base_bdevs_operational": 3, 00:14:41.108 "base_bdevs_list": [ 
00:14:41.108 { 00:14:41.108 "name": "spare", 00:14:41.108 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:41.108 "is_configured": true, 00:14:41.108 "data_offset": 2048, 00:14:41.108 "data_size": 63488 00:14:41.108 }, 00:14:41.108 { 00:14:41.108 "name": "BaseBdev2", 00:14:41.108 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:41.108 "is_configured": true, 00:14:41.108 "data_offset": 2048, 00:14:41.108 "data_size": 63488 00:14:41.108 }, 00:14:41.108 { 00:14:41.108 "name": "BaseBdev3", 00:14:41.108 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:41.108 "is_configured": true, 00:14:41.108 "data_offset": 2048, 00:14:41.108 "data_size": 63488 00:14:41.108 } 00:14:41.108 ] 00:14:41.108 }' 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.108 18:55:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.108 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.109 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.109 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.109 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.109 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.109 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.109 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.109 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.109 "name": "raid_bdev1", 00:14:41.109 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:41.109 "strip_size_kb": 64, 00:14:41.109 "state": "online", 00:14:41.109 "raid_level": "raid5f", 00:14:41.109 "superblock": true, 00:14:41.109 "num_base_bdevs": 3, 00:14:41.109 "num_base_bdevs_discovered": 3, 00:14:41.109 "num_base_bdevs_operational": 3, 00:14:41.109 "base_bdevs_list": [ 00:14:41.109 { 00:14:41.109 "name": "spare", 00:14:41.109 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:41.109 "is_configured": true, 00:14:41.109 "data_offset": 2048, 00:14:41.109 "data_size": 63488 00:14:41.109 }, 00:14:41.109 { 00:14:41.109 "name": "BaseBdev2", 00:14:41.109 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:41.109 "is_configured": true, 00:14:41.109 "data_offset": 2048, 00:14:41.109 "data_size": 63488 00:14:41.109 }, 00:14:41.109 { 00:14:41.109 "name": "BaseBdev3", 00:14:41.109 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:41.109 "is_configured": true, 00:14:41.109 "data_offset": 2048, 00:14:41.109 
"data_size": 63488 00:14:41.109 } 00:14:41.109 ] 00:14:41.109 }' 00:14:41.109 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.109 18:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.679 [2024-11-28 18:55:11.051140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.679 [2024-11-28 18:55:11.051222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.679 [2024-11-28 18:55:11.051334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.679 [2024-11-28 18:55:11.051459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.679 [2024-11-28 18:55:11.051517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:41.679 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.680 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:41.939 /dev/nbd0 00:14:41.939 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:41.939 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.939 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.940 18:55:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.940 1+0 records in 00:14:41.940 1+0 records out 00:14:41.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370267 s, 11.1 MB/s 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.940 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:41.940 /dev/nbd1 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:42.200 18:55:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.200 1+0 records in 00:14:42.200 1+0 records out 00:14:42.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00233701 s, 1.8 MB/s 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.200 18:55:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.200 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:42.460 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:42.460 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:42.460 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:42.460 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.460 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.460 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:42.460 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:42.460 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.460 18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.460 
18:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.721 [2024-11-28 18:55:12.107872] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:42.721 
[2024-11-28 18:55:12.107986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.721 [2024-11-28 18:55:12.108010] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:42.721 [2024-11-28 18:55:12.108021] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.721 [2024-11-28 18:55:12.110183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.721 [2024-11-28 18:55:12.110228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:42.721 [2024-11-28 18:55:12.110307] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:42.721 [2024-11-28 18:55:12.110354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.721 [2024-11-28 18:55:12.110500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.721 [2024-11-28 18:55:12.110597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.721 spare 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.721 [2024-11-28 18:55:12.210661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:42.721 [2024-11-28 18:55:12.210734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:42.721 [2024-11-28 18:55:12.211021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047b10 00:14:42.721 [2024-11-28 18:55:12.211435] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:42.721 [2024-11-28 18:55:12.211448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:42.721 [2024-11-28 18:55:12.211604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.721 "name": "raid_bdev1", 00:14:42.721 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:42.721 "strip_size_kb": 64, 00:14:42.721 "state": "online", 00:14:42.721 "raid_level": "raid5f", 00:14:42.721 "superblock": true, 00:14:42.721 "num_base_bdevs": 3, 00:14:42.721 "num_base_bdevs_discovered": 3, 00:14:42.721 "num_base_bdevs_operational": 3, 00:14:42.721 "base_bdevs_list": [ 00:14:42.721 { 00:14:42.721 "name": "spare", 00:14:42.721 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:42.721 "is_configured": true, 00:14:42.721 "data_offset": 2048, 00:14:42.721 "data_size": 63488 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "name": "BaseBdev2", 00:14:42.721 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:42.721 "is_configured": true, 00:14:42.721 "data_offset": 2048, 00:14:42.721 "data_size": 63488 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "name": "BaseBdev3", 00:14:42.721 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:42.721 "is_configured": true, 00:14:42.721 "data_offset": 2048, 00:14:42.721 "data_size": 63488 00:14:42.721 } 00:14:42.721 ] 00:14:42.721 }' 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.721 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.290 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.290 "name": "raid_bdev1", 00:14:43.290 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:43.291 "strip_size_kb": 64, 00:14:43.291 "state": "online", 00:14:43.291 "raid_level": "raid5f", 00:14:43.291 "superblock": true, 00:14:43.291 "num_base_bdevs": 3, 00:14:43.291 "num_base_bdevs_discovered": 3, 00:14:43.291 "num_base_bdevs_operational": 3, 00:14:43.291 "base_bdevs_list": [ 00:14:43.291 { 00:14:43.291 "name": "spare", 00:14:43.291 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316", 00:14:43.291 "is_configured": true, 00:14:43.291 "data_offset": 2048, 00:14:43.291 "data_size": 63488 00:14:43.291 }, 00:14:43.291 { 00:14:43.291 "name": "BaseBdev2", 00:14:43.291 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:43.291 "is_configured": true, 00:14:43.291 "data_offset": 2048, 00:14:43.291 "data_size": 63488 00:14:43.291 }, 00:14:43.291 { 00:14:43.291 "name": "BaseBdev3", 00:14:43.291 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:43.291 "is_configured": true, 00:14:43.291 "data_offset": 2048, 00:14:43.291 "data_size": 63488 00:14:43.291 } 00:14:43.291 ] 00:14:43.291 }' 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.291 [2024-11-28 18:55:12.828983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.291 "name": "raid_bdev1", 00:14:43.291 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd", 00:14:43.291 "strip_size_kb": 64, 00:14:43.291 "state": "online", 00:14:43.291 "raid_level": "raid5f", 00:14:43.291 "superblock": true, 00:14:43.291 "num_base_bdevs": 3, 00:14:43.291 "num_base_bdevs_discovered": 2, 00:14:43.291 "num_base_bdevs_operational": 2, 00:14:43.291 "base_bdevs_list": [ 00:14:43.291 { 00:14:43.291 "name": null, 00:14:43.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.291 "is_configured": false, 00:14:43.291 "data_offset": 0, 00:14:43.291 "data_size": 63488 00:14:43.291 }, 00:14:43.291 { 00:14:43.291 "name": "BaseBdev2", 
00:14:43.291 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8", 00:14:43.291 "is_configured": true, 00:14:43.291 "data_offset": 2048, 00:14:43.291 "data_size": 63488 00:14:43.291 }, 00:14:43.291 { 00:14:43.291 "name": "BaseBdev3", 00:14:43.291 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189", 00:14:43.291 "is_configured": true, 00:14:43.291 "data_offset": 2048, 00:14:43.291 "data_size": 63488 00:14:43.291 } 00:14:43.291 ] 00:14:43.291 }' 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.291 18:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.860 18:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:43.860 18:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.860 18:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.860 [2024-11-28 18:55:13.285123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.861 [2024-11-28 18:55:13.285309] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:43.861 [2024-11-28 18:55:13.285387] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:43.861 [2024-11-28 18:55:13.285460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:43.861 [2024-11-28 18:55:13.289850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047be0
00:14:43.861 18:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.861 18:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:14:43.861 [2024-11-28 18:55:13.292099] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:44.800 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:44.800 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:44.800 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:44.800 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:44.800 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:44.800 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:44.800 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:44.800 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.800 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:44.801 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.801 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:44.801 "name": "raid_bdev1",
00:14:44.801 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd",
00:14:44.801 "strip_size_kb": 64,
00:14:44.801 "state": "online",
00:14:44.801 "raid_level": "raid5f",
00:14:44.801 "superblock": true,
00:14:44.801 "num_base_bdevs": 3,
00:14:44.801 "num_base_bdevs_discovered": 3,
00:14:44.801 "num_base_bdevs_operational": 3,
00:14:44.801 "process": {
00:14:44.801 "type": "rebuild",
00:14:44.801 "target": "spare",
00:14:44.801 "progress": {
00:14:44.801 "blocks": 20480,
00:14:44.801 "percent": 16
00:14:44.801 }
00:14:44.801 },
00:14:44.801 "base_bdevs_list": [
00:14:44.801 {
00:14:44.801 "name": "spare",
00:14:44.801 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316",
00:14:44.801 "is_configured": true,
00:14:44.801 "data_offset": 2048,
00:14:44.801 "data_size": 63488
00:14:44.801 },
00:14:44.801 {
00:14:44.801 "name": "BaseBdev2",
00:14:44.801 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8",
00:14:44.801 "is_configured": true,
00:14:44.801 "data_offset": 2048,
00:14:44.801 "data_size": 63488
00:14:44.801 },
00:14:44.801 {
00:14:44.801 "name": "BaseBdev3",
00:14:44.801 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189",
00:14:44.801 "is_configured": true,
00:14:44.801 "data_offset": 2048,
00:14:44.801 "data_size": 63488
00:14:44.801 }
00:14:44.801 ]
00:14:44.801 }'
00:14:44.801 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:44.801 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:44.801 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:45.061 [2024-11-28 18:55:14.446319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:45.061 [2024-11-28 18:55:14.500948] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:45.061 [2024-11-28 18:55:14.501005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:45.061 [2024-11-28 18:55:14.501019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:45.061 [2024-11-28 18:55:14.501033] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:45.061 "name": "raid_bdev1",
00:14:45.061 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd",
00:14:45.061 "strip_size_kb": 64,
00:14:45.061 "state": "online",
00:14:45.061 "raid_level": "raid5f",
00:14:45.061 "superblock": true,
00:14:45.061 "num_base_bdevs": 3,
00:14:45.061 "num_base_bdevs_discovered": 2,
00:14:45.061 "num_base_bdevs_operational": 2,
00:14:45.061 "base_bdevs_list": [
00:14:45.061 {
00:14:45.061 "name": null,
00:14:45.061 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.061 "is_configured": false,
00:14:45.061 "data_offset": 0,
00:14:45.061 "data_size": 63488
00:14:45.061 },
00:14:45.061 {
00:14:45.061 "name": "BaseBdev2",
00:14:45.061 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8",
00:14:45.061 "is_configured": true,
00:14:45.061 "data_offset": 2048,
00:14:45.061 "data_size": 63488
00:14:45.061 },
00:14:45.061 {
00:14:45.061 "name": "BaseBdev3",
00:14:45.061 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189",
00:14:45.061 "is_configured": true,
00:14:45.061 "data_offset": 2048,
00:14:45.061 "data_size": 63488
00:14:45.061 }
00:14:45.061 ]
00:14:45.061 }'
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:45.061 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:45.635 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:45.635 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.635 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:45.635 [2024-11-28 18:55:14.938391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:45.635 [2024-11-28 18:55:14.938508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:45.635 [2024-11-28 18:55:14.938547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:14:45.635 [2024-11-28 18:55:14.938583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:45.635 [2024-11-28 18:55:14.939041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:45.635 [2024-11-28 18:55:14.939106] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:45.635 [2024-11-28 18:55:14.939223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:45.635 [2024-11-28 18:55:14.939268] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:14:45.635 [2024-11-28 18:55:14.939308] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:45.635 [2024-11-28 18:55:14.939381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:45.635 [2024-11-28 18:55:14.943555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047cb0
00:14:45.635 spare
00:14:45.635 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.635 18:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:14:45.635 [2024-11-28 18:55:14.945659] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:46.572 18:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.572 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:46.572 "name": "raid_bdev1",
00:14:46.572 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd",
00:14:46.572 "strip_size_kb": 64,
00:14:46.572 "state": "online",
00:14:46.572 "raid_level": "raid5f",
00:14:46.572 "superblock": true,
00:14:46.572 "num_base_bdevs": 3,
00:14:46.572 "num_base_bdevs_discovered": 3,
00:14:46.572 "num_base_bdevs_operational": 3,
00:14:46.572 "process": {
00:14:46.572 "type": "rebuild",
00:14:46.572 "target": "spare",
00:14:46.572 "progress": {
00:14:46.572 "blocks": 20480,
00:14:46.572 "percent": 16
00:14:46.572 }
00:14:46.572 },
00:14:46.572 "base_bdevs_list": [
00:14:46.572 {
00:14:46.572 "name": "spare",
00:14:46.572 "uuid": "9b0d540d-10f2-5753-a525-ba476422e316",
00:14:46.572 "is_configured": true,
00:14:46.572 "data_offset": 2048,
00:14:46.572 "data_size": 63488
00:14:46.572 },
00:14:46.572 {
00:14:46.572 "name": "BaseBdev2",
00:14:46.572 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8",
00:14:46.572 "is_configured": true,
00:14:46.572 "data_offset": 2048,
00:14:46.572 "data_size": 63488
00:14:46.572 },
00:14:46.572 {
00:14:46.572 "name": "BaseBdev3",
00:14:46.572 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189",
00:14:46.572 "is_configured": true,
00:14:46.572 "data_offset": 2048,
00:14:46.572 "data_size": 63488
00:14:46.572 }
00:14:46.572 ]
00:14:46.572 }'
00:14:46.572 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:46.572 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:46.572 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:46.573 [2024-11-28 18:55:16.111895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:46.573 [2024-11-28 18:55:16.154472] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:46.573 [2024-11-28 18:55:16.154583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:46.573 [2024-11-28 18:55:16.154623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:46.573 [2024-11-28 18:55:16.154643] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.573 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:46.831 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.831 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:46.831 "name": "raid_bdev1",
00:14:46.831 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd",
00:14:46.831 "strip_size_kb": 64,
00:14:46.831 "state": "online",
00:14:46.831 "raid_level": "raid5f",
00:14:46.831 "superblock": true,
00:14:46.831 "num_base_bdevs": 3,
00:14:46.831 "num_base_bdevs_discovered": 2,
00:14:46.831 "num_base_bdevs_operational": 2,
00:14:46.831 "base_bdevs_list": [
00:14:46.831 {
00:14:46.831 "name": null,
00:14:46.831 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:46.831 "is_configured": false,
00:14:46.831 "data_offset": 0,
00:14:46.831 "data_size": 63488
00:14:46.831 },
00:14:46.831 {
00:14:46.831 "name": "BaseBdev2",
00:14:46.831 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8",
00:14:46.831 "is_configured": true,
00:14:46.831 "data_offset": 2048,
00:14:46.831 "data_size": 63488
00:14:46.831 },
00:14:46.831 {
00:14:46.831 "name": "BaseBdev3",
00:14:46.831 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189",
00:14:46.832 "is_configured": true,
00:14:46.832 "data_offset": 2048,
00:14:46.832 "data_size": 63488
00:14:46.832 }
00:14:46.832 ]
00:14:46.832 }'
00:14:46.832 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:46.832 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.090 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:47.090 "name": "raid_bdev1",
00:14:47.090 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd",
00:14:47.090 "strip_size_kb": 64,
00:14:47.090 "state": "online",
00:14:47.090 "raid_level": "raid5f",
00:14:47.090 "superblock": true,
00:14:47.090 "num_base_bdevs": 3,
00:14:47.090 "num_base_bdevs_discovered": 2,
00:14:47.090 "num_base_bdevs_operational": 2,
00:14:47.090 "base_bdevs_list": [
00:14:47.090 {
00:14:47.090 "name": null,
00:14:47.090 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.090 "is_configured": false,
00:14:47.090 "data_offset": 0,
00:14:47.090 "data_size": 63488
00:14:47.090 },
00:14:47.090 {
00:14:47.090 "name": "BaseBdev2",
00:14:47.090 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8",
00:14:47.090 "is_configured": true,
00:14:47.090 "data_offset": 2048,
00:14:47.090 "data_size": 63488
00:14:47.090 },
00:14:47.090 {
00:14:47.090 "name": "BaseBdev3",
00:14:47.090 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189",
00:14:47.090 "is_configured": true,
00:14:47.090 "data_offset": 2048,
00:14:47.090 "data_size": 63488
00:14:47.090 }
00:14:47.091 ]
00:14:47.091 }'
00:14:47.091 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.349 [2024-11-28 18:55:16.772153] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:47.349 [2024-11-28 18:55:16.772242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:47.349 [2024-11-28 18:55:16.772267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:14:47.349 [2024-11-28 18:55:16.772276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:47.349 [2024-11-28 18:55:16.772701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:47.349 [2024-11-28 18:55:16.772718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:47.349 [2024-11-28 18:55:16.772786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:14:47.349 [2024-11-28 18:55:16.772809] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:14:47.349 [2024-11-28 18:55:16.772819] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:47.349 [2024-11-28 18:55:16.772827] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:14:47.349 BaseBdev1
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.349 18:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.284 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:48.284 "name": "raid_bdev1",
00:14:48.284 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd",
00:14:48.284 "strip_size_kb": 64,
00:14:48.284 "state": "online",
00:14:48.284 "raid_level": "raid5f",
00:14:48.284 "superblock": true,
00:14:48.284 "num_base_bdevs": 3,
00:14:48.284 "num_base_bdevs_discovered": 2,
00:14:48.284 "num_base_bdevs_operational": 2,
00:14:48.284 "base_bdevs_list": [
00:14:48.285 {
00:14:48.285 "name": null,
00:14:48.285 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.285 "is_configured": false,
00:14:48.285 "data_offset": 0,
00:14:48.285 "data_size": 63488
00:14:48.285 },
00:14:48.285 {
00:14:48.285 "name": "BaseBdev2",
00:14:48.285 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8",
00:14:48.285 "is_configured": true,
00:14:48.285 "data_offset": 2048,
00:14:48.285 "data_size": 63488
00:14:48.285 },
00:14:48.285 {
00:14:48.285 "name": "BaseBdev3",
00:14:48.285 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189",
00:14:48.285 "is_configured": true,
00:14:48.285 "data_offset": 2048,
00:14:48.285 "data_size": 63488
00:14:48.285 }
00:14:48.285 ]
00:14:48.285 }'
00:14:48.285 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:48.285 18:55:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:48.854 "name": "raid_bdev1",
00:14:48.854 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd",
00:14:48.854 "strip_size_kb": 64,
00:14:48.854 "state": "online",
00:14:48.854 "raid_level": "raid5f",
00:14:48.854 "superblock": true,
00:14:48.854 "num_base_bdevs": 3,
00:14:48.854 "num_base_bdevs_discovered": 2,
00:14:48.854 "num_base_bdevs_operational": 2,
00:14:48.854 "base_bdevs_list": [
00:14:48.854 {
00:14:48.854 "name": null,
00:14:48.854 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.854 "is_configured": false,
00:14:48.854 "data_offset": 0,
00:14:48.854 "data_size": 63488
00:14:48.854 },
00:14:48.854 {
00:14:48.854 "name": "BaseBdev2",
00:14:48.854 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8",
00:14:48.854 "is_configured": true,
00:14:48.854 "data_offset": 2048,
00:14:48.854 "data_size": 63488
00:14:48.854 },
00:14:48.854 {
00:14:48.854 "name": "BaseBdev3",
00:14:48.854 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189",
00:14:48.854 "is_configured": true,
00:14:48.854 "data_offset": 2048,
00:14:48.854 "data_size": 63488
00:14:48.854 }
00:14:48.854 ]
00:14:48.854 }'
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.854 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.854 [2024-11-28 18:55:18.372607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:48.854 [2024-11-28 18:55:18.372730] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:14:48.854 [2024-11-28 18:55:18.372744] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:48.854 request:
00:14:48.854 {
00:14:48.854 "base_bdev": "BaseBdev1",
00:14:48.854 "raid_bdev": "raid_bdev1",
00:14:48.854 "method": "bdev_raid_add_base_bdev",
00:14:48.854 "req_id": 1
00:14:48.854 }
00:14:48.854 Got JSON-RPC error response
00:14:48.855 response:
00:14:48.855 {
00:14:48.855 "code": -22,
00:14:48.855 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:14:48.855 }
00:14:48.855 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:48.855 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1
00:14:48.855 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:48.855 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:48.855 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:48.855 18:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:49.793 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:50.052 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.052 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:50.052 "name": "raid_bdev1",
00:14:50.052 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd",
00:14:50.052 "strip_size_kb": 64,
00:14:50.052 "state": "online",
00:14:50.052 "raid_level": "raid5f",
00:14:50.052 "superblock": true,
00:14:50.052 "num_base_bdevs": 3,
00:14:50.052 "num_base_bdevs_discovered": 2,
00:14:50.052 "num_base_bdevs_operational": 2,
00:14:50.052 "base_bdevs_list": [
00:14:50.052 {
00:14:50.052 "name": null,
00:14:50.052 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:50.052 "is_configured": false,
00:14:50.052 "data_offset": 0,
00:14:50.052 "data_size": 63488
00:14:50.052 },
00:14:50.052 {
00:14:50.052 "name": "BaseBdev2",
00:14:50.052 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8",
00:14:50.052 "is_configured": true,
00:14:50.052 "data_offset": 2048,
00:14:50.052 "data_size": 63488
00:14:50.052 },
00:14:50.052 {
00:14:50.052 "name": "BaseBdev3",
00:14:50.052 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189",
00:14:50.052 "is_configured": true,
00:14:50.052 "data_offset": 2048,
00:14:50.052 "data_size": 63488
00:14:50.052 }
00:14:50.052 ]
00:14:50.052 }'
00:14:50.053 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:50.053 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:50.312 "name": "raid_bdev1",
00:14:50.312 "uuid": "ee317e53-0622-4f47-88c9-7b3569ed53bd",
00:14:50.312 "strip_size_kb": 64,
00:14:50.312 "state": "online",
00:14:50.312 "raid_level": "raid5f",
00:14:50.312 "superblock": true,
00:14:50.312 "num_base_bdevs": 3,
00:14:50.312 "num_base_bdevs_discovered": 2,
00:14:50.312 "num_base_bdevs_operational": 2,
00:14:50.312 "base_bdevs_list": [
00:14:50.312 {
00:14:50.312 "name": null,
00:14:50.312 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:50.312 "is_configured": false,
00:14:50.312 "data_offset": 0,
00:14:50.312 "data_size": 63488
00:14:50.312 },
00:14:50.312 {
00:14:50.312 "name": "BaseBdev2",
00:14:50.312 "uuid": "ac0d3389-a33d-549a-9a05-076dbd8782d8",
00:14:50.312 "is_configured": true,
00:14:50.312 "data_offset": 2048,
00:14:50.312 "data_size": 63488
00:14:50.312 },
00:14:50.312 {
00:14:50.312 "name": "BaseBdev3",
00:14:50.312 "uuid": "fc00cecd-50cb-5d14-97d9-b273fe1c6189",
00:14:50.312 "is_configured": true,
00:14:50.312 "data_offset": 2048,
00:14:50.312 "data_size": 63488
00:14:50.312 }
00:14:50.312 ]
00:14:50.312 }'
00:14:50.312 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:50.572 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:50.572 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:50.572 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:50.572 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 94000
00:14:50.572 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 94000 ']'
00:14:50.572 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 94000
00:14:50.572 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname
00:14:50.572 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:50.572 18:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94000
00:14:50.572 18:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:50.572 18:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:50.572 18:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94000'
00:14:50.572 killing process with pid 94000
00:14:50.572 18:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 94000
00:14:50.572 Received shutdown signal, test time was about 60.000000 seconds
00:14:50.572
00:14:50.572 Latency(us)
00:14:50.572 [2024-11-28T18:55:20.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:50.572 [2024-11-28T18:55:20.178Z] ===================================================================================================================
00:14:50.572 [2024-11-28T18:55:20.178Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:14:50.572 [2024-11-28T18:55:20.178Z] ===================================================================================================================
00:14:50.572 [2024-11-28 18:55:20.009543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:50.572 [2024-11-28 18:55:20.009662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:50.572 [2024-11-28 18:55:20.009718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:50.572 [2024-11-28 18:55:20.009729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:14:50.572 18:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 94000
00:14:50.832 [2024-11-28 18:55:20.050126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:50.832 18:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:14:50.832 ************************
00:14:50.832 END TEST
raid5f_rebuild_test_sb 00:14:50.832 ************************************ 00:14:50.832 00:14:50.832 real 0m21.632s 00:14:50.832 user 0m28.183s 00:14:50.832 sys 0m2.776s 00:14:50.832 18:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.832 18:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.832 18:55:20 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:50.832 18:55:20 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:50.832 18:55:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:50.832 18:55:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.832 18:55:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.832 ************************************ 00:14:50.832 START TEST raid5f_state_function_test 00:14:50.832 ************************************ 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=94738 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94738' 00:14:50.832 Process raid pid: 94738 00:14:50.832 18:55:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 94738 00:14:50.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.833 18:55:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 94738 ']' 00:14:50.833 18:55:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.833 18:55:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.833 18:55:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.833 18:55:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.833 18:55:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.093 [2024-11-28 18:55:20.444716] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:14:51.093 [2024-11-28 18:55:20.444865] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.093 [2024-11-28 18:55:20.587229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:51.093 [2024-11-28 18:55:20.625762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.093 [2024-11-28 18:55:20.652948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.093 [2024-11-28 18:55:20.697098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.353 [2024-11-28 18:55:20.697228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.921 [2024-11-28 18:55:21.257625] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.921 [2024-11-28 18:55:21.257760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.921 [2024-11-28 18:55:21.257778] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.921 [2024-11-28 18:55:21.257787] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.921 [2024-11-28 18:55:21.257797] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.921 [2024-11-28 18:55:21.257803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.921 [2024-11-28 18:55:21.257811] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:51.921 [2024-11-28 18:55:21.257818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.921 "name": "Existed_Raid", 00:14:51.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.921 "strip_size_kb": 64, 00:14:51.921 "state": "configuring", 00:14:51.921 "raid_level": "raid5f", 00:14:51.921 "superblock": false, 00:14:51.921 "num_base_bdevs": 4, 00:14:51.921 "num_base_bdevs_discovered": 0, 00:14:51.921 "num_base_bdevs_operational": 4, 00:14:51.921 "base_bdevs_list": [ 00:14:51.921 { 00:14:51.921 "name": "BaseBdev1", 00:14:51.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.921 "is_configured": false, 00:14:51.921 "data_offset": 0, 00:14:51.921 "data_size": 0 00:14:51.921 }, 00:14:51.921 { 00:14:51.921 "name": "BaseBdev2", 00:14:51.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.921 "is_configured": false, 00:14:51.921 "data_offset": 0, 00:14:51.921 "data_size": 0 00:14:51.921 }, 00:14:51.921 { 00:14:51.921 "name": "BaseBdev3", 00:14:51.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.921 "is_configured": false, 00:14:51.921 "data_offset": 0, 00:14:51.921 "data_size": 0 00:14:51.921 }, 00:14:51.921 { 00:14:51.921 "name": "BaseBdev4", 00:14:51.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.921 "is_configured": false, 00:14:51.921 "data_offset": 0, 00:14:51.921 "data_size": 0 00:14:51.921 } 00:14:51.921 ] 00:14:51.921 }' 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:14:51.921 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.180 [2024-11-28 18:55:21.701635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.180 [2024-11-28 18:55:21.701667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.180 [2024-11-28 18:55:21.713678] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.180 [2024-11-28 18:55:21.713717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.180 [2024-11-28 18:55:21.713728] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.180 [2024-11-28 18:55:21.713751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.180 [2024-11-28 18:55:21.713758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:52.180 [2024-11-28 18:55:21.713765] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.180 [2024-11-28 18:55:21.713772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.180 [2024-11-28 18:55:21.713779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.180 [2024-11-28 18:55:21.734531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.180 BaseBdev1 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.180 18:55:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.180 [ 00:14:52.180 { 00:14:52.180 "name": "BaseBdev1", 00:14:52.180 "aliases": [ 00:14:52.180 "cb28904f-7965-4cdd-8ff0-2370f04ad523" 00:14:52.180 ], 00:14:52.180 "product_name": "Malloc disk", 00:14:52.180 "block_size": 512, 00:14:52.180 "num_blocks": 65536, 00:14:52.180 "uuid": "cb28904f-7965-4cdd-8ff0-2370f04ad523", 00:14:52.180 "assigned_rate_limits": { 00:14:52.180 "rw_ios_per_sec": 0, 00:14:52.180 "rw_mbytes_per_sec": 0, 00:14:52.180 "r_mbytes_per_sec": 0, 00:14:52.180 "w_mbytes_per_sec": 0 00:14:52.180 }, 00:14:52.180 "claimed": true, 00:14:52.180 "claim_type": "exclusive_write", 00:14:52.180 "zoned": false, 00:14:52.180 "supported_io_types": { 00:14:52.180 "read": true, 00:14:52.180 "write": true, 00:14:52.180 "unmap": true, 00:14:52.180 "flush": true, 00:14:52.180 "reset": true, 00:14:52.180 "nvme_admin": false, 00:14:52.180 "nvme_io": false, 00:14:52.180 "nvme_io_md": false, 00:14:52.180 "write_zeroes": true, 00:14:52.180 "zcopy": true, 00:14:52.180 "get_zone_info": false, 00:14:52.180 "zone_management": false, 00:14:52.180 "zone_append": false, 00:14:52.180 "compare": false, 00:14:52.180 "compare_and_write": false, 00:14:52.180 "abort": true, 00:14:52.180 "seek_hole": false, 00:14:52.180 "seek_data": false, 00:14:52.180 "copy": true, 00:14:52.180 "nvme_iov_md": false 00:14:52.180 }, 00:14:52.180 "memory_domains": [ 00:14:52.180 { 00:14:52.180 "dma_device_id": "system", 00:14:52.180 "dma_device_type": 1 
00:14:52.180 }, 00:14:52.180 { 00:14:52.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.180 "dma_device_type": 2 00:14:52.180 } 00:14:52.180 ], 00:14:52.180 "driver_specific": {} 00:14:52.180 } 00:14:52.180 ] 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.180 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.180 
18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.440 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.440 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.440 "name": "Existed_Raid", 00:14:52.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.440 "strip_size_kb": 64, 00:14:52.440 "state": "configuring", 00:14:52.440 "raid_level": "raid5f", 00:14:52.440 "superblock": false, 00:14:52.440 "num_base_bdevs": 4, 00:14:52.440 "num_base_bdevs_discovered": 1, 00:14:52.440 "num_base_bdevs_operational": 4, 00:14:52.440 "base_bdevs_list": [ 00:14:52.440 { 00:14:52.440 "name": "BaseBdev1", 00:14:52.440 "uuid": "cb28904f-7965-4cdd-8ff0-2370f04ad523", 00:14:52.440 "is_configured": true, 00:14:52.440 "data_offset": 0, 00:14:52.440 "data_size": 65536 00:14:52.440 }, 00:14:52.440 { 00:14:52.440 "name": "BaseBdev2", 00:14:52.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.440 "is_configured": false, 00:14:52.440 "data_offset": 0, 00:14:52.440 "data_size": 0 00:14:52.440 }, 00:14:52.440 { 00:14:52.440 "name": "BaseBdev3", 00:14:52.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.440 "is_configured": false, 00:14:52.440 "data_offset": 0, 00:14:52.440 "data_size": 0 00:14:52.440 }, 00:14:52.440 { 00:14:52.440 "name": "BaseBdev4", 00:14:52.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.440 "is_configured": false, 00:14:52.440 "data_offset": 0, 00:14:52.440 "data_size": 0 00:14:52.440 } 00:14:52.440 ] 00:14:52.440 }' 00:14:52.440 18:55:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.440 18:55:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.700 18:55:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.700 [2024-11-28 18:55:22.258685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.700 [2024-11-28 18:55:22.258732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.700 [2024-11-28 18:55:22.270728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.700 [2024-11-28 18:55:22.272567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.700 [2024-11-28 18:55:22.272648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.700 [2024-11-28 18:55:22.272663] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:52.700 [2024-11-28 18:55:22.272670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.700 [2024-11-28 18:55:22.272678] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.700 [2024-11-28 18:55:22.272684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.700 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:52.960 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.960 "name": "Existed_Raid", 00:14:52.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.960 "strip_size_kb": 64, 00:14:52.960 "state": "configuring", 00:14:52.960 "raid_level": "raid5f", 00:14:52.960 "superblock": false, 00:14:52.960 "num_base_bdevs": 4, 00:14:52.960 "num_base_bdevs_discovered": 1, 00:14:52.960 "num_base_bdevs_operational": 4, 00:14:52.960 "base_bdevs_list": [ 00:14:52.960 { 00:14:52.960 "name": "BaseBdev1", 00:14:52.960 "uuid": "cb28904f-7965-4cdd-8ff0-2370f04ad523", 00:14:52.960 "is_configured": true, 00:14:52.960 "data_offset": 0, 00:14:52.960 "data_size": 65536 00:14:52.960 }, 00:14:52.960 { 00:14:52.960 "name": "BaseBdev2", 00:14:52.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.960 "is_configured": false, 00:14:52.960 "data_offset": 0, 00:14:52.960 "data_size": 0 00:14:52.960 }, 00:14:52.960 { 00:14:52.960 "name": "BaseBdev3", 00:14:52.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.960 "is_configured": false, 00:14:52.960 "data_offset": 0, 00:14:52.960 "data_size": 0 00:14:52.960 }, 00:14:52.960 { 00:14:52.960 "name": "BaseBdev4", 00:14:52.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.961 "is_configured": false, 00:14:52.961 "data_offset": 0, 00:14:52.961 "data_size": 0 00:14:52.961 } 00:14:52.961 ] 00:14:52.961 }' 00:14:52.961 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.961 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.221 
[2024-11-28 18:55:22.757888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.221 BaseBdev2 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.221 [ 00:14:53.221 { 00:14:53.221 "name": "BaseBdev2", 00:14:53.221 "aliases": [ 00:14:53.221 "22311944-ad65-4e06-a12d-d21c24a55162" 00:14:53.221 ], 00:14:53.221 "product_name": "Malloc disk", 00:14:53.221 "block_size": 512, 00:14:53.221 "num_blocks": 
65536, 00:14:53.221 "uuid": "22311944-ad65-4e06-a12d-d21c24a55162", 00:14:53.221 "assigned_rate_limits": { 00:14:53.221 "rw_ios_per_sec": 0, 00:14:53.221 "rw_mbytes_per_sec": 0, 00:14:53.221 "r_mbytes_per_sec": 0, 00:14:53.221 "w_mbytes_per_sec": 0 00:14:53.221 }, 00:14:53.221 "claimed": true, 00:14:53.221 "claim_type": "exclusive_write", 00:14:53.221 "zoned": false, 00:14:53.221 "supported_io_types": { 00:14:53.221 "read": true, 00:14:53.221 "write": true, 00:14:53.221 "unmap": true, 00:14:53.221 "flush": true, 00:14:53.221 "reset": true, 00:14:53.221 "nvme_admin": false, 00:14:53.221 "nvme_io": false, 00:14:53.221 "nvme_io_md": false, 00:14:53.221 "write_zeroes": true, 00:14:53.221 "zcopy": true, 00:14:53.221 "get_zone_info": false, 00:14:53.221 "zone_management": false, 00:14:53.221 "zone_append": false, 00:14:53.221 "compare": false, 00:14:53.221 "compare_and_write": false, 00:14:53.221 "abort": true, 00:14:53.221 "seek_hole": false, 00:14:53.221 "seek_data": false, 00:14:53.221 "copy": true, 00:14:53.221 "nvme_iov_md": false 00:14:53.221 }, 00:14:53.221 "memory_domains": [ 00:14:53.221 { 00:14:53.221 "dma_device_id": "system", 00:14:53.221 "dma_device_type": 1 00:14:53.221 }, 00:14:53.221 { 00:14:53.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.221 "dma_device_type": 2 00:14:53.221 } 00:14:53.221 ], 00:14:53.221 "driver_specific": {} 00:14:53.221 } 00:14:53.221 ] 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:53.221 18:55:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.221 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.482 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.482 "name": "Existed_Raid", 00:14:53.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.482 "strip_size_kb": 64, 00:14:53.482 "state": "configuring", 00:14:53.482 "raid_level": "raid5f", 00:14:53.482 "superblock": false, 00:14:53.482 "num_base_bdevs": 4, 00:14:53.482 
"num_base_bdevs_discovered": 2, 00:14:53.482 "num_base_bdevs_operational": 4, 00:14:53.482 "base_bdevs_list": [ 00:14:53.482 { 00:14:53.482 "name": "BaseBdev1", 00:14:53.482 "uuid": "cb28904f-7965-4cdd-8ff0-2370f04ad523", 00:14:53.482 "is_configured": true, 00:14:53.482 "data_offset": 0, 00:14:53.482 "data_size": 65536 00:14:53.482 }, 00:14:53.482 { 00:14:53.482 "name": "BaseBdev2", 00:14:53.482 "uuid": "22311944-ad65-4e06-a12d-d21c24a55162", 00:14:53.482 "is_configured": true, 00:14:53.482 "data_offset": 0, 00:14:53.482 "data_size": 65536 00:14:53.482 }, 00:14:53.482 { 00:14:53.482 "name": "BaseBdev3", 00:14:53.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.482 "is_configured": false, 00:14:53.482 "data_offset": 0, 00:14:53.482 "data_size": 0 00:14:53.482 }, 00:14:53.482 { 00:14:53.482 "name": "BaseBdev4", 00:14:53.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.482 "is_configured": false, 00:14:53.482 "data_offset": 0, 00:14:53.482 "data_size": 0 00:14:53.482 } 00:14:53.482 ] 00:14:53.482 }' 00:14:53.482 18:55:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.482 18:55:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.742 [2024-11-28 18:55:23.238140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:53.742 BaseBdev3 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:53.742 18:55:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.742 [ 00:14:53.742 { 00:14:53.742 "name": "BaseBdev3", 00:14:53.742 "aliases": [ 00:14:53.742 "d9749003-a4df-4153-9fd3-b4eda41b30df" 00:14:53.742 ], 00:14:53.742 "product_name": "Malloc disk", 00:14:53.742 "block_size": 512, 00:14:53.742 "num_blocks": 65536, 00:14:53.742 "uuid": "d9749003-a4df-4153-9fd3-b4eda41b30df", 00:14:53.742 "assigned_rate_limits": { 00:14:53.742 "rw_ios_per_sec": 0, 00:14:53.742 "rw_mbytes_per_sec": 0, 00:14:53.742 "r_mbytes_per_sec": 0, 00:14:53.742 "w_mbytes_per_sec": 0 00:14:53.742 }, 00:14:53.742 "claimed": true, 00:14:53.742 "claim_type": "exclusive_write", 00:14:53.742 "zoned": false, 00:14:53.742 
"supported_io_types": { 00:14:53.742 "read": true, 00:14:53.742 "write": true, 00:14:53.742 "unmap": true, 00:14:53.742 "flush": true, 00:14:53.742 "reset": true, 00:14:53.742 "nvme_admin": false, 00:14:53.742 "nvme_io": false, 00:14:53.742 "nvme_io_md": false, 00:14:53.742 "write_zeroes": true, 00:14:53.742 "zcopy": true, 00:14:53.742 "get_zone_info": false, 00:14:53.742 "zone_management": false, 00:14:53.742 "zone_append": false, 00:14:53.742 "compare": false, 00:14:53.742 "compare_and_write": false, 00:14:53.742 "abort": true, 00:14:53.742 "seek_hole": false, 00:14:53.742 "seek_data": false, 00:14:53.742 "copy": true, 00:14:53.742 "nvme_iov_md": false 00:14:53.742 }, 00:14:53.742 "memory_domains": [ 00:14:53.742 { 00:14:53.742 "dma_device_id": "system", 00:14:53.742 "dma_device_type": 1 00:14:53.742 }, 00:14:53.742 { 00:14:53.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.742 "dma_device_type": 2 00:14:53.742 } 00:14:53.742 ], 00:14:53.742 "driver_specific": {} 00:14:53.742 } 00:14:53.742 ] 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:53.742 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.743 18:55:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.743 "name": "Existed_Raid", 00:14:53.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.743 "strip_size_kb": 64, 00:14:53.743 "state": "configuring", 00:14:53.743 "raid_level": "raid5f", 00:14:53.743 "superblock": false, 00:14:53.743 "num_base_bdevs": 4, 00:14:53.743 "num_base_bdevs_discovered": 3, 00:14:53.743 "num_base_bdevs_operational": 4, 00:14:53.743 "base_bdevs_list": [ 00:14:53.743 { 00:14:53.743 "name": "BaseBdev1", 00:14:53.743 "uuid": "cb28904f-7965-4cdd-8ff0-2370f04ad523", 00:14:53.743 "is_configured": true, 00:14:53.743 "data_offset": 0, 00:14:53.743 "data_size": 65536 00:14:53.743 }, 00:14:53.743 { 00:14:53.743 "name": 
"BaseBdev2", 00:14:53.743 "uuid": "22311944-ad65-4e06-a12d-d21c24a55162", 00:14:53.743 "is_configured": true, 00:14:53.743 "data_offset": 0, 00:14:53.743 "data_size": 65536 00:14:53.743 }, 00:14:53.743 { 00:14:53.743 "name": "BaseBdev3", 00:14:53.743 "uuid": "d9749003-a4df-4153-9fd3-b4eda41b30df", 00:14:53.743 "is_configured": true, 00:14:53.743 "data_offset": 0, 00:14:53.743 "data_size": 65536 00:14:53.743 }, 00:14:53.743 { 00:14:53.743 "name": "BaseBdev4", 00:14:53.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.743 "is_configured": false, 00:14:53.743 "data_offset": 0, 00:14:53.743 "data_size": 0 00:14:53.743 } 00:14:53.743 ] 00:14:53.743 }' 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.743 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.312 [2024-11-28 18:55:23.753402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:54.312 [2024-11-28 18:55:23.753566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:54.312 [2024-11-28 18:55:23.753586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:54.312 [2024-11-28 18:55:23.753908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:54.312 [2024-11-28 18:55:23.754371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:54.312 [2024-11-28 18:55:23.754382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007b00 00:14:54.312 [2024-11-28 18:55:23.754592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.312 BaseBdev4 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.312 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.312 [ 00:14:54.312 { 00:14:54.312 "name": "BaseBdev4", 00:14:54.312 "aliases": [ 00:14:54.312 "6e0c101b-bf3e-40c6-b72c-d75c4c263d18" 00:14:54.312 ], 00:14:54.312 "product_name": "Malloc disk", 00:14:54.312 "block_size": 512, 
00:14:54.312 "num_blocks": 65536, 00:14:54.312 "uuid": "6e0c101b-bf3e-40c6-b72c-d75c4c263d18", 00:14:54.312 "assigned_rate_limits": { 00:14:54.312 "rw_ios_per_sec": 0, 00:14:54.312 "rw_mbytes_per_sec": 0, 00:14:54.313 "r_mbytes_per_sec": 0, 00:14:54.313 "w_mbytes_per_sec": 0 00:14:54.313 }, 00:14:54.313 "claimed": true, 00:14:54.313 "claim_type": "exclusive_write", 00:14:54.313 "zoned": false, 00:14:54.313 "supported_io_types": { 00:14:54.313 "read": true, 00:14:54.313 "write": true, 00:14:54.313 "unmap": true, 00:14:54.313 "flush": true, 00:14:54.313 "reset": true, 00:14:54.313 "nvme_admin": false, 00:14:54.313 "nvme_io": false, 00:14:54.313 "nvme_io_md": false, 00:14:54.313 "write_zeroes": true, 00:14:54.313 "zcopy": true, 00:14:54.313 "get_zone_info": false, 00:14:54.313 "zone_management": false, 00:14:54.313 "zone_append": false, 00:14:54.313 "compare": false, 00:14:54.313 "compare_and_write": false, 00:14:54.313 "abort": true, 00:14:54.313 "seek_hole": false, 00:14:54.313 "seek_data": false, 00:14:54.313 "copy": true, 00:14:54.313 "nvme_iov_md": false 00:14:54.313 }, 00:14:54.313 "memory_domains": [ 00:14:54.313 { 00:14:54.313 "dma_device_id": "system", 00:14:54.313 "dma_device_type": 1 00:14:54.313 }, 00:14:54.313 { 00:14:54.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.313 "dma_device_type": 2 00:14:54.313 } 00:14:54.313 ], 00:14:54.313 "driver_specific": {} 00:14:54.313 } 00:14:54.313 ] 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 
00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.313 "name": "Existed_Raid", 00:14:54.313 "uuid": "5eb63a08-2bf2-4667-b838-d756701a09ab", 00:14:54.313 "strip_size_kb": 64, 00:14:54.313 "state": "online", 00:14:54.313 "raid_level": "raid5f", 00:14:54.313 "superblock": false, 00:14:54.313 "num_base_bdevs": 4, 00:14:54.313 
"num_base_bdevs_discovered": 4, 00:14:54.313 "num_base_bdevs_operational": 4, 00:14:54.313 "base_bdevs_list": [ 00:14:54.313 { 00:14:54.313 "name": "BaseBdev1", 00:14:54.313 "uuid": "cb28904f-7965-4cdd-8ff0-2370f04ad523", 00:14:54.313 "is_configured": true, 00:14:54.313 "data_offset": 0, 00:14:54.313 "data_size": 65536 00:14:54.313 }, 00:14:54.313 { 00:14:54.313 "name": "BaseBdev2", 00:14:54.313 "uuid": "22311944-ad65-4e06-a12d-d21c24a55162", 00:14:54.313 "is_configured": true, 00:14:54.313 "data_offset": 0, 00:14:54.313 "data_size": 65536 00:14:54.313 }, 00:14:54.313 { 00:14:54.313 "name": "BaseBdev3", 00:14:54.313 "uuid": "d9749003-a4df-4153-9fd3-b4eda41b30df", 00:14:54.313 "is_configured": true, 00:14:54.313 "data_offset": 0, 00:14:54.313 "data_size": 65536 00:14:54.313 }, 00:14:54.313 { 00:14:54.313 "name": "BaseBdev4", 00:14:54.313 "uuid": "6e0c101b-bf3e-40c6-b72c-d75c4c263d18", 00:14:54.313 "is_configured": true, 00:14:54.313 "data_offset": 0, 00:14:54.313 "data_size": 65536 00:14:54.313 } 00:14:54.313 ] 00:14:54.313 }' 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.313 18:55:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:54.573 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:54.573 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:54.573 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:54.573 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:54.573 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:54.573 18:55:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:54.573 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.573 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:54.573 [2024-11-28 18:55:24.173735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.833 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.833 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:54.833 "name": "Existed_Raid", 00:14:54.833 "aliases": [ 00:14:54.833 "5eb63a08-2bf2-4667-b838-d756701a09ab" 00:14:54.833 ], 00:14:54.833 "product_name": "Raid Volume", 00:14:54.833 "block_size": 512, 00:14:54.833 "num_blocks": 196608, 00:14:54.833 "uuid": "5eb63a08-2bf2-4667-b838-d756701a09ab", 00:14:54.833 "assigned_rate_limits": { 00:14:54.833 "rw_ios_per_sec": 0, 00:14:54.833 "rw_mbytes_per_sec": 0, 00:14:54.833 "r_mbytes_per_sec": 0, 00:14:54.833 "w_mbytes_per_sec": 0 00:14:54.833 }, 00:14:54.833 "claimed": false, 00:14:54.833 "zoned": false, 00:14:54.833 "supported_io_types": { 00:14:54.833 "read": true, 00:14:54.833 "write": true, 00:14:54.833 "unmap": false, 00:14:54.833 "flush": false, 00:14:54.833 "reset": true, 00:14:54.833 "nvme_admin": false, 00:14:54.833 "nvme_io": false, 00:14:54.833 "nvme_io_md": false, 00:14:54.833 "write_zeroes": true, 00:14:54.833 "zcopy": false, 00:14:54.833 "get_zone_info": false, 00:14:54.833 "zone_management": false, 00:14:54.833 "zone_append": false, 00:14:54.833 "compare": false, 00:14:54.833 "compare_and_write": false, 00:14:54.834 "abort": false, 00:14:54.834 "seek_hole": false, 00:14:54.834 "seek_data": false, 00:14:54.834 "copy": false, 00:14:54.834 "nvme_iov_md": false 
00:14:54.834 }, 00:14:54.834 "driver_specific": { 00:14:54.834 "raid": { 00:14:54.834 "uuid": "5eb63a08-2bf2-4667-b838-d756701a09ab", 00:14:54.834 "strip_size_kb": 64, 00:14:54.834 "state": "online", 00:14:54.834 "raid_level": "raid5f", 00:14:54.834 "superblock": false, 00:14:54.834 "num_base_bdevs": 4, 00:14:54.834 "num_base_bdevs_discovered": 4, 00:14:54.834 "num_base_bdevs_operational": 4, 00:14:54.834 "base_bdevs_list": [ 00:14:54.834 { 00:14:54.834 "name": "BaseBdev1", 00:14:54.834 "uuid": "cb28904f-7965-4cdd-8ff0-2370f04ad523", 00:14:54.834 "is_configured": true, 00:14:54.834 "data_offset": 0, 00:14:54.834 "data_size": 65536 00:14:54.834 }, 00:14:54.834 { 00:14:54.834 "name": "BaseBdev2", 00:14:54.834 "uuid": "22311944-ad65-4e06-a12d-d21c24a55162", 00:14:54.834 "is_configured": true, 00:14:54.834 "data_offset": 0, 00:14:54.834 "data_size": 65536 00:14:54.834 }, 00:14:54.834 { 00:14:54.834 "name": "BaseBdev3", 00:14:54.834 "uuid": "d9749003-a4df-4153-9fd3-b4eda41b30df", 00:14:54.834 "is_configured": true, 00:14:54.834 "data_offset": 0, 00:14:54.834 "data_size": 65536 00:14:54.834 }, 00:14:54.834 { 00:14:54.834 "name": "BaseBdev4", 00:14:54.834 "uuid": "6e0c101b-bf3e-40c6-b72c-d75c4c263d18", 00:14:54.834 "is_configured": true, 00:14:54.834 "data_offset": 0, 00:14:54.834 "data_size": 65536 00:14:54.834 } 00:14:54.834 ] 00:14:54.834 } 00:14:54.834 } 00:14:54.834 }' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:54.834 BaseBdev2 00:14:54.834 BaseBdev3 00:14:54.834 BaseBdev4' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.834 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:55.094 
18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.094 [2024-11-28 18:55:24.501721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.094 "name": "Existed_Raid", 00:14:55.094 "uuid": "5eb63a08-2bf2-4667-b838-d756701a09ab", 00:14:55.094 "strip_size_kb": 64, 00:14:55.094 "state": "online", 00:14:55.094 "raid_level": "raid5f", 00:14:55.094 "superblock": false, 00:14:55.094 "num_base_bdevs": 4, 00:14:55.094 "num_base_bdevs_discovered": 3, 00:14:55.094 "num_base_bdevs_operational": 3, 00:14:55.094 "base_bdevs_list": [ 00:14:55.094 { 00:14:55.094 "name": null, 00:14:55.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.094 "is_configured": false, 00:14:55.094 "data_offset": 0, 00:14:55.094 "data_size": 65536 00:14:55.094 }, 00:14:55.094 { 00:14:55.094 "name": "BaseBdev2", 00:14:55.094 "uuid": "22311944-ad65-4e06-a12d-d21c24a55162", 00:14:55.094 "is_configured": true, 00:14:55.094 "data_offset": 0, 00:14:55.094 "data_size": 65536 00:14:55.094 }, 00:14:55.094 { 00:14:55.094 "name": "BaseBdev3", 00:14:55.094 "uuid": "d9749003-a4df-4153-9fd3-b4eda41b30df", 00:14:55.094 "is_configured": true, 00:14:55.094 "data_offset": 0, 00:14:55.094 "data_size": 65536 00:14:55.094 }, 00:14:55.094 { 00:14:55.094 "name": "BaseBdev4", 00:14:55.094 "uuid": "6e0c101b-bf3e-40c6-b72c-d75c4c263d18", 00:14:55.094 
"is_configured": true, 00:14:55.094 "data_offset": 0, 00:14:55.094 "data_size": 65536 00:14:55.094 } 00:14:55.094 ] 00:14:55.094 }' 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.094 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.353 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:55.353 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:55.353 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.353 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:55.353 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.353 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:55.619 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.619 18:55:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:55.619 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 18:55:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 [2024-11-28 18:55:24.997070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.619 [2024-11-28 18:55:24.997216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.619 [2024-11-28 18:55:25.008601] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 [2024-11-28 18:55:25.068632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:55.619 18:55:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 [2024-11-28 18:55:25.135777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:55.619 [2024-11-28 18:55:25.135874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.619 18:55:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 BaseBdev2 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.888 [ 00:14:55.888 { 00:14:55.888 "name": "BaseBdev2", 00:14:55.888 "aliases": [ 00:14:55.888 "b5707524-f391-44ad-8e8d-4c14a7fd01ef" 00:14:55.888 ], 00:14:55.888 "product_name": "Malloc disk", 00:14:55.888 "block_size": 512, 00:14:55.888 "num_blocks": 65536, 00:14:55.888 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:55.888 "assigned_rate_limits": { 00:14:55.888 "rw_ios_per_sec": 0, 00:14:55.888 "rw_mbytes_per_sec": 0, 00:14:55.888 "r_mbytes_per_sec": 0, 00:14:55.888 "w_mbytes_per_sec": 0 00:14:55.888 }, 00:14:55.888 "claimed": false, 00:14:55.888 "zoned": false, 00:14:55.888 "supported_io_types": { 00:14:55.888 "read": true, 00:14:55.888 "write": true, 00:14:55.888 "unmap": true, 00:14:55.888 "flush": true, 00:14:55.888 "reset": true, 00:14:55.888 "nvme_admin": false, 00:14:55.888 "nvme_io": false, 00:14:55.888 "nvme_io_md": false, 00:14:55.888 "write_zeroes": true, 00:14:55.888 "zcopy": true, 00:14:55.888 "get_zone_info": false, 00:14:55.888 "zone_management": false, 00:14:55.888 "zone_append": false, 00:14:55.888 "compare": false, 00:14:55.888 "compare_and_write": false, 00:14:55.888 "abort": true, 00:14:55.888 "seek_hole": false, 00:14:55.888 
"seek_data": false, 00:14:55.888 "copy": true, 00:14:55.888 "nvme_iov_md": false 00:14:55.888 }, 00:14:55.888 "memory_domains": [ 00:14:55.888 { 00:14:55.888 "dma_device_id": "system", 00:14:55.888 "dma_device_type": 1 00:14:55.888 }, 00:14:55.888 { 00:14:55.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.888 "dma_device_type": 2 00:14:55.888 } 00:14:55.888 ], 00:14:55.888 "driver_specific": {} 00:14:55.888 } 00:14:55.888 ] 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.888 BaseBdev3 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.888 [ 00:14:55.888 { 00:14:55.888 "name": "BaseBdev3", 00:14:55.888 "aliases": [ 00:14:55.888 "6a489a86-e5b8-4d7a-be37-abce30fd933b" 00:14:55.888 ], 00:14:55.888 "product_name": "Malloc disk", 00:14:55.888 "block_size": 512, 00:14:55.888 "num_blocks": 65536, 00:14:55.888 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:55.888 "assigned_rate_limits": { 00:14:55.888 "rw_ios_per_sec": 0, 00:14:55.888 "rw_mbytes_per_sec": 0, 00:14:55.888 "r_mbytes_per_sec": 0, 00:14:55.888 "w_mbytes_per_sec": 0 00:14:55.888 }, 00:14:55.888 "claimed": false, 00:14:55.888 "zoned": false, 00:14:55.888 "supported_io_types": { 00:14:55.888 "read": true, 00:14:55.888 "write": true, 00:14:55.888 "unmap": true, 00:14:55.888 "flush": true, 00:14:55.888 "reset": true, 00:14:55.888 "nvme_admin": false, 00:14:55.888 "nvme_io": false, 00:14:55.888 "nvme_io_md": false, 00:14:55.888 "write_zeroes": true, 00:14:55.888 "zcopy": true, 00:14:55.888 "get_zone_info": false, 00:14:55.888 "zone_management": false, 00:14:55.888 "zone_append": false, 00:14:55.888 "compare": false, 00:14:55.888 "compare_and_write": false, 00:14:55.888 "abort": true, 
00:14:55.888 "seek_hole": false, 00:14:55.888 "seek_data": false, 00:14:55.888 "copy": true, 00:14:55.888 "nvme_iov_md": false 00:14:55.888 }, 00:14:55.888 "memory_domains": [ 00:14:55.888 { 00:14:55.888 "dma_device_id": "system", 00:14:55.888 "dma_device_type": 1 00:14:55.888 }, 00:14:55.888 { 00:14:55.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.888 "dma_device_type": 2 00:14:55.888 } 00:14:55.888 ], 00:14:55.888 "driver_specific": {} 00:14:55.888 } 00:14:55.888 ] 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.888 BaseBdev4 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.888 18:55:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.888 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.888 [ 00:14:55.888 { 00:14:55.888 "name": "BaseBdev4", 00:14:55.888 "aliases": [ 00:14:55.888 "7158c087-c733-4e2a-9a2f-aacc38abf53d" 00:14:55.888 ], 00:14:55.888 "product_name": "Malloc disk", 00:14:55.888 "block_size": 512, 00:14:55.888 "num_blocks": 65536, 00:14:55.888 "uuid": "7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:55.888 "assigned_rate_limits": { 00:14:55.888 "rw_ios_per_sec": 0, 00:14:55.888 "rw_mbytes_per_sec": 0, 00:14:55.888 "r_mbytes_per_sec": 0, 00:14:55.888 "w_mbytes_per_sec": 0 00:14:55.888 }, 00:14:55.888 "claimed": false, 00:14:55.888 "zoned": false, 00:14:55.888 "supported_io_types": { 00:14:55.888 "read": true, 00:14:55.888 "write": true, 00:14:55.888 "unmap": true, 00:14:55.888 "flush": true, 00:14:55.888 "reset": true, 00:14:55.888 "nvme_admin": false, 00:14:55.888 "nvme_io": false, 00:14:55.888 "nvme_io_md": false, 00:14:55.888 "write_zeroes": true, 00:14:55.888 "zcopy": true, 00:14:55.888 "get_zone_info": false, 00:14:55.889 "zone_management": false, 00:14:55.889 "zone_append": false, 00:14:55.889 "compare": false, 00:14:55.889 
"compare_and_write": false, 00:14:55.889 "abort": true, 00:14:55.889 "seek_hole": false, 00:14:55.889 "seek_data": false, 00:14:55.889 "copy": true, 00:14:55.889 "nvme_iov_md": false 00:14:55.889 }, 00:14:55.889 "memory_domains": [ 00:14:55.889 { 00:14:55.889 "dma_device_id": "system", 00:14:55.889 "dma_device_type": 1 00:14:55.889 }, 00:14:55.889 { 00:14:55.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.889 "dma_device_type": 2 00:14:55.889 } 00:14:55.889 ], 00:14:55.889 "driver_specific": {} 00:14:55.889 } 00:14:55.889 ] 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.889 [2024-11-28 18:55:25.351202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.889 [2024-11-28 18:55:25.351304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.889 [2024-11-28 18:55:25.351341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.889 [2024-11-28 18:55:25.353254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.889 [2024-11-28 18:55:25.353347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev4 is claimed 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:55.889 "name": "Existed_Raid", 00:14:55.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.889 "strip_size_kb": 64, 00:14:55.889 "state": "configuring", 00:14:55.889 "raid_level": "raid5f", 00:14:55.889 "superblock": false, 00:14:55.889 "num_base_bdevs": 4, 00:14:55.889 "num_base_bdevs_discovered": 3, 00:14:55.889 "num_base_bdevs_operational": 4, 00:14:55.889 "base_bdevs_list": [ 00:14:55.889 { 00:14:55.889 "name": "BaseBdev1", 00:14:55.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.889 "is_configured": false, 00:14:55.889 "data_offset": 0, 00:14:55.889 "data_size": 0 00:14:55.889 }, 00:14:55.889 { 00:14:55.889 "name": "BaseBdev2", 00:14:55.889 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:55.889 "is_configured": true, 00:14:55.889 "data_offset": 0, 00:14:55.889 "data_size": 65536 00:14:55.889 }, 00:14:55.889 { 00:14:55.889 "name": "BaseBdev3", 00:14:55.889 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:55.889 "is_configured": true, 00:14:55.889 "data_offset": 0, 00:14:55.889 "data_size": 65536 00:14:55.889 }, 00:14:55.889 { 00:14:55.889 "name": "BaseBdev4", 00:14:55.889 "uuid": "7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:55.889 "is_configured": true, 00:14:55.889 "data_offset": 0, 00:14:55.889 "data_size": 65536 00:14:55.889 } 00:14:55.889 ] 00:14:55.889 }' 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.889 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.463 [2024-11-28 18:55:25.803297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.463 "name": 
"Existed_Raid", 00:14:56.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.463 "strip_size_kb": 64, 00:14:56.463 "state": "configuring", 00:14:56.463 "raid_level": "raid5f", 00:14:56.463 "superblock": false, 00:14:56.463 "num_base_bdevs": 4, 00:14:56.463 "num_base_bdevs_discovered": 2, 00:14:56.463 "num_base_bdevs_operational": 4, 00:14:56.463 "base_bdevs_list": [ 00:14:56.463 { 00:14:56.463 "name": "BaseBdev1", 00:14:56.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.463 "is_configured": false, 00:14:56.463 "data_offset": 0, 00:14:56.463 "data_size": 0 00:14:56.463 }, 00:14:56.463 { 00:14:56.463 "name": null, 00:14:56.463 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:56.463 "is_configured": false, 00:14:56.463 "data_offset": 0, 00:14:56.463 "data_size": 65536 00:14:56.463 }, 00:14:56.463 { 00:14:56.463 "name": "BaseBdev3", 00:14:56.463 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:56.463 "is_configured": true, 00:14:56.463 "data_offset": 0, 00:14:56.463 "data_size": 65536 00:14:56.463 }, 00:14:56.463 { 00:14:56.463 "name": "BaseBdev4", 00:14:56.463 "uuid": "7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:56.463 "is_configured": true, 00:14:56.463 "data_offset": 0, 00:14:56.463 "data_size": 65536 00:14:56.463 } 00:14:56.463 ] 00:14:56.463 }' 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.463 18:55:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.724 18:55:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.724 [2024-11-28 18:55:26.270452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.724 BaseBdev1 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.724 18:55:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.724 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.725 [ 00:14:56.725 { 00:14:56.725 "name": "BaseBdev1", 00:14:56.725 "aliases": [ 00:14:56.725 "df103e0d-78e8-473f-bf44-78b00de3479a" 00:14:56.725 ], 00:14:56.725 "product_name": "Malloc disk", 00:14:56.725 "block_size": 512, 00:14:56.725 "num_blocks": 65536, 00:14:56.725 "uuid": "df103e0d-78e8-473f-bf44-78b00de3479a", 00:14:56.725 "assigned_rate_limits": { 00:14:56.725 "rw_ios_per_sec": 0, 00:14:56.725 "rw_mbytes_per_sec": 0, 00:14:56.725 "r_mbytes_per_sec": 0, 00:14:56.725 "w_mbytes_per_sec": 0 00:14:56.725 }, 00:14:56.725 "claimed": true, 00:14:56.725 "claim_type": "exclusive_write", 00:14:56.725 "zoned": false, 00:14:56.725 "supported_io_types": { 00:14:56.725 "read": true, 00:14:56.725 "write": true, 00:14:56.725 "unmap": true, 00:14:56.725 "flush": true, 00:14:56.725 "reset": true, 00:14:56.725 "nvme_admin": false, 00:14:56.725 "nvme_io": false, 00:14:56.725 "nvme_io_md": false, 00:14:56.725 "write_zeroes": true, 00:14:56.725 "zcopy": true, 00:14:56.725 "get_zone_info": false, 00:14:56.725 "zone_management": false, 00:14:56.725 "zone_append": false, 00:14:56.725 "compare": false, 00:14:56.725 "compare_and_write": false, 00:14:56.725 "abort": true, 00:14:56.725 "seek_hole": false, 00:14:56.725 "seek_data": false, 00:14:56.725 "copy": true, 00:14:56.725 "nvme_iov_md": false 00:14:56.725 }, 00:14:56.725 "memory_domains": [ 00:14:56.725 { 00:14:56.725 "dma_device_id": "system", 00:14:56.725 "dma_device_type": 1 00:14:56.725 }, 00:14:56.725 { 00:14:56.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.725 "dma_device_type": 2 00:14:56.725 } 00:14:56.725 ], 00:14:56.725 "driver_specific": {} 00:14:56.725 } 00:14:56.725 ] 
00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.725 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.985 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.985 18:55:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.985 "name": "Existed_Raid", 00:14:56.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.985 "strip_size_kb": 64, 00:14:56.985 "state": "configuring", 00:14:56.985 "raid_level": "raid5f", 00:14:56.985 "superblock": false, 00:14:56.985 "num_base_bdevs": 4, 00:14:56.985 "num_base_bdevs_discovered": 3, 00:14:56.985 "num_base_bdevs_operational": 4, 00:14:56.985 "base_bdevs_list": [ 00:14:56.985 { 00:14:56.985 "name": "BaseBdev1", 00:14:56.985 "uuid": "df103e0d-78e8-473f-bf44-78b00de3479a", 00:14:56.985 "is_configured": true, 00:14:56.985 "data_offset": 0, 00:14:56.985 "data_size": 65536 00:14:56.985 }, 00:14:56.985 { 00:14:56.985 "name": null, 00:14:56.985 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:56.985 "is_configured": false, 00:14:56.985 "data_offset": 0, 00:14:56.985 "data_size": 65536 00:14:56.985 }, 00:14:56.985 { 00:14:56.985 "name": "BaseBdev3", 00:14:56.985 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:56.985 "is_configured": true, 00:14:56.985 "data_offset": 0, 00:14:56.985 "data_size": 65536 00:14:56.985 }, 00:14:56.985 { 00:14:56.985 "name": "BaseBdev4", 00:14:56.985 "uuid": "7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:56.985 "is_configured": true, 00:14:56.985 "data_offset": 0, 00:14:56.985 "data_size": 65536 00:14:56.985 } 00:14:56.985 ] 00:14:56.985 }' 00:14:56.985 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.985 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.245 18:55:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.245 [2024-11-28 18:55:26.826639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.245 
18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.245 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.505 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.506 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.506 "name": "Existed_Raid", 00:14:57.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.506 "strip_size_kb": 64, 00:14:57.506 "state": "configuring", 00:14:57.506 "raid_level": "raid5f", 00:14:57.506 "superblock": false, 00:14:57.506 "num_base_bdevs": 4, 00:14:57.506 "num_base_bdevs_discovered": 2, 00:14:57.506 "num_base_bdevs_operational": 4, 00:14:57.506 "base_bdevs_list": [ 00:14:57.506 { 00:14:57.506 "name": "BaseBdev1", 00:14:57.506 "uuid": "df103e0d-78e8-473f-bf44-78b00de3479a", 00:14:57.506 "is_configured": true, 00:14:57.506 "data_offset": 0, 00:14:57.506 "data_size": 65536 00:14:57.506 }, 00:14:57.506 { 00:14:57.506 "name": null, 00:14:57.506 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:57.506 "is_configured": false, 00:14:57.506 "data_offset": 0, 00:14:57.506 "data_size": 65536 00:14:57.506 }, 00:14:57.506 { 00:14:57.506 "name": null, 00:14:57.506 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:57.506 "is_configured": false, 00:14:57.506 "data_offset": 0, 00:14:57.506 "data_size": 65536 00:14:57.506 }, 00:14:57.506 { 00:14:57.506 "name": "BaseBdev4", 00:14:57.506 "uuid": "7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:57.506 "is_configured": true, 00:14:57.506 
"data_offset": 0, 00:14:57.506 "data_size": 65536 00:14:57.506 } 00:14:57.506 ] 00:14:57.506 }' 00:14:57.506 18:55:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.506 18:55:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.766 [2024-11-28 18:55:27.338798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.766 
18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.766 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.026 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.026 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.026 "name": "Existed_Raid", 00:14:58.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.026 "strip_size_kb": 64, 00:14:58.026 "state": "configuring", 00:14:58.026 "raid_level": "raid5f", 00:14:58.026 "superblock": false, 00:14:58.026 "num_base_bdevs": 4, 00:14:58.026 "num_base_bdevs_discovered": 3, 00:14:58.026 "num_base_bdevs_operational": 4, 00:14:58.026 "base_bdevs_list": [ 00:14:58.026 { 00:14:58.026 "name": "BaseBdev1", 00:14:58.026 "uuid": "df103e0d-78e8-473f-bf44-78b00de3479a", 00:14:58.026 "is_configured": 
true, 00:14:58.026 "data_offset": 0, 00:14:58.026 "data_size": 65536 00:14:58.026 }, 00:14:58.026 { 00:14:58.026 "name": null, 00:14:58.026 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:58.026 "is_configured": false, 00:14:58.026 "data_offset": 0, 00:14:58.026 "data_size": 65536 00:14:58.026 }, 00:14:58.026 { 00:14:58.026 "name": "BaseBdev3", 00:14:58.026 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:58.026 "is_configured": true, 00:14:58.026 "data_offset": 0, 00:14:58.026 "data_size": 65536 00:14:58.026 }, 00:14:58.026 { 00:14:58.026 "name": "BaseBdev4", 00:14:58.026 "uuid": "7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:58.026 "is_configured": true, 00:14:58.026 "data_offset": 0, 00:14:58.026 "data_size": 65536 00:14:58.026 } 00:14:58.026 ] 00:14:58.026 }' 00:14:58.026 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.026 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.287 [2024-11-28 18:55:27.822939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.287 "name": "Existed_Raid", 00:14:58.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.287 "strip_size_kb": 64, 00:14:58.287 "state": "configuring", 00:14:58.287 "raid_level": "raid5f", 00:14:58.287 "superblock": false, 00:14:58.287 "num_base_bdevs": 4, 00:14:58.287 "num_base_bdevs_discovered": 2, 00:14:58.287 "num_base_bdevs_operational": 4, 00:14:58.287 "base_bdevs_list": [ 00:14:58.287 { 00:14:58.287 "name": null, 00:14:58.287 "uuid": "df103e0d-78e8-473f-bf44-78b00de3479a", 00:14:58.287 "is_configured": false, 00:14:58.287 "data_offset": 0, 00:14:58.287 "data_size": 65536 00:14:58.287 }, 00:14:58.287 { 00:14:58.287 "name": null, 00:14:58.287 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:58.287 "is_configured": false, 00:14:58.287 "data_offset": 0, 00:14:58.287 "data_size": 65536 00:14:58.287 }, 00:14:58.287 { 00:14:58.287 "name": "BaseBdev3", 00:14:58.287 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:58.287 "is_configured": true, 00:14:58.287 "data_offset": 0, 00:14:58.287 "data_size": 65536 00:14:58.287 }, 00:14:58.287 { 00:14:58.287 "name": "BaseBdev4", 00:14:58.287 "uuid": "7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:58.287 "is_configured": true, 00:14:58.287 "data_offset": 0, 00:14:58.287 "data_size": 65536 00:14:58.287 } 00:14:58.287 ] 00:14:58.287 }' 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.287 18:55:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.858 [2024-11-28 18:55:28.277584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.858 "name": "Existed_Raid", 00:14:58.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.858 "strip_size_kb": 64, 00:14:58.858 "state": "configuring", 00:14:58.858 "raid_level": "raid5f", 00:14:58.858 "superblock": false, 00:14:58.858 "num_base_bdevs": 4, 00:14:58.858 "num_base_bdevs_discovered": 3, 00:14:58.858 "num_base_bdevs_operational": 4, 00:14:58.858 "base_bdevs_list": [ 00:14:58.858 { 00:14:58.858 "name": null, 00:14:58.858 "uuid": "df103e0d-78e8-473f-bf44-78b00de3479a", 00:14:58.858 "is_configured": false, 00:14:58.858 "data_offset": 0, 00:14:58.858 "data_size": 65536 00:14:58.858 }, 00:14:58.858 { 00:14:58.858 "name": "BaseBdev2", 00:14:58.858 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:58.858 "is_configured": true, 00:14:58.858 "data_offset": 0, 00:14:58.858 "data_size": 65536 00:14:58.858 }, 00:14:58.858 { 00:14:58.858 "name": "BaseBdev3", 00:14:58.858 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:58.858 "is_configured": true, 00:14:58.858 "data_offset": 0, 00:14:58.858 "data_size": 65536 00:14:58.858 }, 00:14:58.858 { 00:14:58.858 "name": "BaseBdev4", 00:14:58.858 "uuid": 
"7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:58.858 "is_configured": true, 00:14:58.858 "data_offset": 0, 00:14:58.858 "data_size": 65536 00:14:58.858 } 00:14:58.858 ] 00:14:58.858 }' 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.858 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.118 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.118 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:59.118 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.118 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.118 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u df103e0d-78e8-473f-bf44-78b00de3479a 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.378 18:55:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.378 [2024-11-28 18:55:28.780692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:59.378 [2024-11-28 18:55:28.780795] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:59.378 [2024-11-28 18:55:28.780823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:59.378 [2024-11-28 18:55:28.781085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:14:59.378 [2024-11-28 18:55:28.781617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:59.378 [2024-11-28 18:55:28.781666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:59.378 [2024-11-28 18:55:28.781878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.378 NewBaseBdev 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.378 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.378 [ 00:14:59.378 { 00:14:59.378 "name": "NewBaseBdev", 00:14:59.378 "aliases": [ 00:14:59.378 "df103e0d-78e8-473f-bf44-78b00de3479a" 00:14:59.378 ], 00:14:59.378 "product_name": "Malloc disk", 00:14:59.378 "block_size": 512, 00:14:59.378 "num_blocks": 65536, 00:14:59.378 "uuid": "df103e0d-78e8-473f-bf44-78b00de3479a", 00:14:59.378 "assigned_rate_limits": { 00:14:59.378 "rw_ios_per_sec": 0, 00:14:59.378 "rw_mbytes_per_sec": 0, 00:14:59.378 "r_mbytes_per_sec": 0, 00:14:59.378 "w_mbytes_per_sec": 0 00:14:59.378 }, 00:14:59.378 "claimed": true, 00:14:59.378 "claim_type": "exclusive_write", 00:14:59.378 "zoned": false, 00:14:59.378 "supported_io_types": { 00:14:59.378 "read": true, 00:14:59.378 "write": true, 00:14:59.378 "unmap": true, 00:14:59.379 "flush": true, 00:14:59.379 "reset": true, 00:14:59.379 "nvme_admin": false, 00:14:59.379 "nvme_io": false, 00:14:59.379 "nvme_io_md": false, 00:14:59.379 "write_zeroes": true, 00:14:59.379 "zcopy": true, 00:14:59.379 "get_zone_info": false, 00:14:59.379 "zone_management": false, 00:14:59.379 "zone_append": false, 00:14:59.379 "compare": false, 00:14:59.379 "compare_and_write": false, 00:14:59.379 "abort": true, 00:14:59.379 "seek_hole": false, 00:14:59.379 "seek_data": false, 00:14:59.379 "copy": true, 00:14:59.379 "nvme_iov_md": false 00:14:59.379 }, 00:14:59.379 "memory_domains": [ 00:14:59.379 { 
00:14:59.379 "dma_device_id": "system", 00:14:59.379 "dma_device_type": 1 00:14:59.379 }, 00:14:59.379 { 00:14:59.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.379 "dma_device_type": 2 00:14:59.379 } 00:14:59.379 ], 00:14:59.379 "driver_specific": {} 00:14:59.379 } 00:14:59.379 ] 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.379 "name": "Existed_Raid", 00:14:59.379 "uuid": "e9bb7b8d-91b5-4852-abd8-48540031e5b0", 00:14:59.379 "strip_size_kb": 64, 00:14:59.379 "state": "online", 00:14:59.379 "raid_level": "raid5f", 00:14:59.379 "superblock": false, 00:14:59.379 "num_base_bdevs": 4, 00:14:59.379 "num_base_bdevs_discovered": 4, 00:14:59.379 "num_base_bdevs_operational": 4, 00:14:59.379 "base_bdevs_list": [ 00:14:59.379 { 00:14:59.379 "name": "NewBaseBdev", 00:14:59.379 "uuid": "df103e0d-78e8-473f-bf44-78b00de3479a", 00:14:59.379 "is_configured": true, 00:14:59.379 "data_offset": 0, 00:14:59.379 "data_size": 65536 00:14:59.379 }, 00:14:59.379 { 00:14:59.379 "name": "BaseBdev2", 00:14:59.379 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:59.379 "is_configured": true, 00:14:59.379 "data_offset": 0, 00:14:59.379 "data_size": 65536 00:14:59.379 }, 00:14:59.379 { 00:14:59.379 "name": "BaseBdev3", 00:14:59.379 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:59.379 "is_configured": true, 00:14:59.379 "data_offset": 0, 00:14:59.379 "data_size": 65536 00:14:59.379 }, 00:14:59.379 { 00:14:59.379 "name": "BaseBdev4", 00:14:59.379 "uuid": "7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:59.379 "is_configured": true, 00:14:59.379 "data_offset": 0, 00:14:59.379 "data_size": 65536 00:14:59.379 } 00:14:59.379 ] 00:14:59.379 }' 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.379 18:55:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.948 [2024-11-28 18:55:29.301065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.948 "name": "Existed_Raid", 00:14:59.948 "aliases": [ 00:14:59.948 "e9bb7b8d-91b5-4852-abd8-48540031e5b0" 00:14:59.948 ], 00:14:59.948 "product_name": "Raid Volume", 00:14:59.948 "block_size": 512, 00:14:59.948 "num_blocks": 196608, 00:14:59.948 "uuid": "e9bb7b8d-91b5-4852-abd8-48540031e5b0", 00:14:59.948 "assigned_rate_limits": { 00:14:59.948 "rw_ios_per_sec": 0, 00:14:59.948 "rw_mbytes_per_sec": 0, 00:14:59.948 "r_mbytes_per_sec": 0, 00:14:59.948 "w_mbytes_per_sec": 0 00:14:59.948 }, 00:14:59.948 "claimed": false, 00:14:59.948 "zoned": false, 00:14:59.948 "supported_io_types": { 00:14:59.948 
"read": true, 00:14:59.948 "write": true, 00:14:59.948 "unmap": false, 00:14:59.948 "flush": false, 00:14:59.948 "reset": true, 00:14:59.948 "nvme_admin": false, 00:14:59.948 "nvme_io": false, 00:14:59.948 "nvme_io_md": false, 00:14:59.948 "write_zeroes": true, 00:14:59.948 "zcopy": false, 00:14:59.948 "get_zone_info": false, 00:14:59.948 "zone_management": false, 00:14:59.948 "zone_append": false, 00:14:59.948 "compare": false, 00:14:59.948 "compare_and_write": false, 00:14:59.948 "abort": false, 00:14:59.948 "seek_hole": false, 00:14:59.948 "seek_data": false, 00:14:59.948 "copy": false, 00:14:59.948 "nvme_iov_md": false 00:14:59.948 }, 00:14:59.948 "driver_specific": { 00:14:59.948 "raid": { 00:14:59.948 "uuid": "e9bb7b8d-91b5-4852-abd8-48540031e5b0", 00:14:59.948 "strip_size_kb": 64, 00:14:59.948 "state": "online", 00:14:59.948 "raid_level": "raid5f", 00:14:59.948 "superblock": false, 00:14:59.948 "num_base_bdevs": 4, 00:14:59.948 "num_base_bdevs_discovered": 4, 00:14:59.948 "num_base_bdevs_operational": 4, 00:14:59.948 "base_bdevs_list": [ 00:14:59.948 { 00:14:59.948 "name": "NewBaseBdev", 00:14:59.948 "uuid": "df103e0d-78e8-473f-bf44-78b00de3479a", 00:14:59.948 "is_configured": true, 00:14:59.948 "data_offset": 0, 00:14:59.948 "data_size": 65536 00:14:59.948 }, 00:14:59.948 { 00:14:59.948 "name": "BaseBdev2", 00:14:59.948 "uuid": "b5707524-f391-44ad-8e8d-4c14a7fd01ef", 00:14:59.948 "is_configured": true, 00:14:59.948 "data_offset": 0, 00:14:59.948 "data_size": 65536 00:14:59.948 }, 00:14:59.948 { 00:14:59.948 "name": "BaseBdev3", 00:14:59.948 "uuid": "6a489a86-e5b8-4d7a-be37-abce30fd933b", 00:14:59.948 "is_configured": true, 00:14:59.948 "data_offset": 0, 00:14:59.948 "data_size": 65536 00:14:59.948 }, 00:14:59.948 { 00:14:59.948 "name": "BaseBdev4", 00:14:59.948 "uuid": "7158c087-c733-4e2a-9a2f-aacc38abf53d", 00:14:59.948 "is_configured": true, 00:14:59.948 "data_offset": 0, 00:14:59.948 "data_size": 65536 00:14:59.948 } 00:14:59.948 ] 00:14:59.948 } 
00:14:59.948 } 00:14:59.948 }' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:59.948 BaseBdev2 00:14:59.948 BaseBdev3 00:14:59.948 BaseBdev4' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.948 
18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.948 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.206 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.206 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:00.207 18:55:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.207 [2024-11-28 18:55:29.620947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.207 [2024-11-28 18:55:29.621011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.207 [2024-11-28 18:55:29.621081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.207 [2024-11-28 18:55:29.621329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.207 [2024-11-28 18:55:29.621348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 94738 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 94738 ']' 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 94738 
00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94738 00:15:00.207 killing process with pid 94738 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94738' 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 94738 00:15:00.207 [2024-11-28 18:55:29.661715] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.207 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 94738 00:15:00.207 [2024-11-28 18:55:29.701572] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.465 18:55:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:00.465 00:15:00.465 real 0m9.594s 00:15:00.465 user 0m16.399s 00:15:00.465 sys 0m2.132s 00:15:00.465 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.465 18:55:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.465 ************************************ 00:15:00.465 END TEST raid5f_state_function_test 00:15:00.465 ************************************ 00:15:00.465 18:55:29 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:00.465 18:55:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:00.465 
18:55:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.465 18:55:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.465 ************************************ 00:15:00.465 START TEST raid5f_state_function_test_sb 00:15:00.465 ************************************ 00:15:00.465 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:00.465 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:00.465 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:00.465 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:00.465 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:00.465 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:00.466 
18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=95382 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc 
-i 0 -L bdev_raid 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95382' 00:15:00.466 Process raid pid: 95382 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 95382 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95382 ']' 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.466 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.725 [2024-11-28 18:55:30.122833] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:00.725 [2024-11-28 18:55:30.123637] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.725 [2024-11-28 18:55:30.266214] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:00.725 [2024-11-28 18:55:30.302624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.983 [2024-11-28 18:55:30.328800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.983 [2024-11-28 18:55:30.371835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.983 [2024-11-28 18:55:30.371882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.551 [2024-11-28 18:55:30.928030] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.551 [2024-11-28 18:55:30.928085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.551 [2024-11-28 18:55:30.928097] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.551 [2024-11-28 18:55:30.928105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.551 [2024-11-28 18:55:30.928115] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.551 [2024-11-28 18:55:30.928121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.551 [2024-11-28 18:55:30.928128] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:15:01.551 [2024-11-28 18:55:30.928134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.551 18:55:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.551 "name": "Existed_Raid", 00:15:01.551 "uuid": "ffafca58-0f34-454c-90cd-c22167dfdf38", 00:15:01.551 "strip_size_kb": 64, 00:15:01.551 "state": "configuring", 00:15:01.551 "raid_level": "raid5f", 00:15:01.551 "superblock": true, 00:15:01.551 "num_base_bdevs": 4, 00:15:01.551 "num_base_bdevs_discovered": 0, 00:15:01.551 "num_base_bdevs_operational": 4, 00:15:01.551 "base_bdevs_list": [ 00:15:01.551 { 00:15:01.551 "name": "BaseBdev1", 00:15:01.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.551 "is_configured": false, 00:15:01.551 "data_offset": 0, 00:15:01.551 "data_size": 0 00:15:01.551 }, 00:15:01.551 { 00:15:01.551 "name": "BaseBdev2", 00:15:01.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.551 "is_configured": false, 00:15:01.551 "data_offset": 0, 00:15:01.551 "data_size": 0 00:15:01.551 }, 00:15:01.551 { 00:15:01.551 "name": "BaseBdev3", 00:15:01.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.551 "is_configured": false, 00:15:01.551 "data_offset": 0, 00:15:01.551 "data_size": 0 00:15:01.551 }, 00:15:01.551 { 00:15:01.551 "name": "BaseBdev4", 00:15:01.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.551 "is_configured": false, 00:15:01.551 "data_offset": 0, 00:15:01.551 "data_size": 0 00:15:01.551 } 00:15:01.551 ] 00:15:01.551 }' 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.551 18:55:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.811 18:55:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.811 [2024-11-28 18:55:31.364045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.811 [2024-11-28 18:55:31.364121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.811 [2024-11-28 18:55:31.372087] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.811 [2024-11-28 18:55:31.372159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.811 [2024-11-28 18:55:31.372191] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.811 [2024-11-28 18:55:31.372211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.811 [2024-11-28 18:55:31.372231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.811 [2024-11-28 18:55:31.372249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.811 [2024-11-28 18:55:31.372268] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:01.811 [2024-11-28 18:55:31.372286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:01.811 18:55:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.811 [2024-11-28 18:55:31.389006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.811 BaseBdev1 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.811 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.811 [ 00:15:01.811 { 00:15:01.811 "name": "BaseBdev1", 00:15:02.070 "aliases": [ 00:15:02.070 "9e34cebf-0e4f-4221-983a-1ea300bb498e" 00:15:02.070 ], 00:15:02.070 "product_name": "Malloc disk", 00:15:02.070 "block_size": 512, 00:15:02.070 "num_blocks": 65536, 00:15:02.070 "uuid": "9e34cebf-0e4f-4221-983a-1ea300bb498e", 00:15:02.070 "assigned_rate_limits": { 00:15:02.070 "rw_ios_per_sec": 0, 00:15:02.070 "rw_mbytes_per_sec": 0, 00:15:02.070 "r_mbytes_per_sec": 0, 00:15:02.070 "w_mbytes_per_sec": 0 00:15:02.070 }, 00:15:02.070 "claimed": true, 00:15:02.070 "claim_type": "exclusive_write", 00:15:02.070 "zoned": false, 00:15:02.070 "supported_io_types": { 00:15:02.070 "read": true, 00:15:02.070 "write": true, 00:15:02.070 "unmap": true, 00:15:02.070 "flush": true, 00:15:02.070 "reset": true, 00:15:02.070 "nvme_admin": false, 00:15:02.070 "nvme_io": false, 00:15:02.070 "nvme_io_md": false, 00:15:02.070 "write_zeroes": true, 00:15:02.070 "zcopy": true, 00:15:02.070 "get_zone_info": false, 00:15:02.070 "zone_management": false, 00:15:02.070 "zone_append": false, 00:15:02.070 "compare": false, 00:15:02.070 "compare_and_write": false, 00:15:02.070 "abort": true, 00:15:02.070 "seek_hole": false, 00:15:02.070 "seek_data": false, 00:15:02.070 "copy": true, 00:15:02.070 "nvme_iov_md": false 00:15:02.070 }, 00:15:02.070 "memory_domains": [ 00:15:02.070 { 00:15:02.070 "dma_device_id": "system", 00:15:02.070 "dma_device_type": 1 00:15:02.070 }, 00:15:02.070 { 00:15:02.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.070 "dma_device_type": 2 00:15:02.070 } 00:15:02.070 ], 00:15:02.070 "driver_specific": {} 00:15:02.070 } 00:15:02.070 ] 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.070 "name": "Existed_Raid", 00:15:02.070 "uuid": "02c17cfb-b7b0-4c5b-938d-2e3303596c91", 00:15:02.070 "strip_size_kb": 64, 00:15:02.070 "state": "configuring", 00:15:02.070 "raid_level": "raid5f", 00:15:02.070 "superblock": true, 00:15:02.070 "num_base_bdevs": 4, 00:15:02.070 "num_base_bdevs_discovered": 1, 00:15:02.070 "num_base_bdevs_operational": 4, 00:15:02.070 "base_bdevs_list": [ 00:15:02.070 { 00:15:02.070 "name": "BaseBdev1", 00:15:02.070 "uuid": "9e34cebf-0e4f-4221-983a-1ea300bb498e", 00:15:02.070 "is_configured": true, 00:15:02.070 "data_offset": 2048, 00:15:02.070 "data_size": 63488 00:15:02.070 }, 00:15:02.070 { 00:15:02.070 "name": "BaseBdev2", 00:15:02.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.070 "is_configured": false, 00:15:02.070 "data_offset": 0, 00:15:02.070 "data_size": 0 00:15:02.070 }, 00:15:02.070 { 00:15:02.070 "name": "BaseBdev3", 00:15:02.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.070 "is_configured": false, 00:15:02.070 "data_offset": 0, 00:15:02.070 "data_size": 0 00:15:02.070 }, 00:15:02.070 { 00:15:02.070 "name": "BaseBdev4", 00:15:02.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.070 "is_configured": false, 00:15:02.070 "data_offset": 0, 00:15:02.070 "data_size": 0 00:15:02.070 } 00:15:02.070 ] 00:15:02.070 }' 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.070 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.328 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:02.328 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.328 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.328 [2024-11-28 18:55:31.857134] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.329 [2024-11-28 18:55:31.857192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.329 [2024-11-28 18:55:31.865203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.329 [2024-11-28 18:55:31.866939] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.329 [2024-11-28 18:55:31.866972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.329 [2024-11-28 18:55:31.866982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:02.329 [2024-11-28 18:55:31.866989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:02.329 [2024-11-28 18:55:31.866996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:02.329 [2024-11-28 18:55:31.867003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.329 "name": "Existed_Raid", 00:15:02.329 "uuid": 
"8fda41e7-6812-4626-a567-22468de506fc", 00:15:02.329 "strip_size_kb": 64, 00:15:02.329 "state": "configuring", 00:15:02.329 "raid_level": "raid5f", 00:15:02.329 "superblock": true, 00:15:02.329 "num_base_bdevs": 4, 00:15:02.329 "num_base_bdevs_discovered": 1, 00:15:02.329 "num_base_bdevs_operational": 4, 00:15:02.329 "base_bdevs_list": [ 00:15:02.329 { 00:15:02.329 "name": "BaseBdev1", 00:15:02.329 "uuid": "9e34cebf-0e4f-4221-983a-1ea300bb498e", 00:15:02.329 "is_configured": true, 00:15:02.329 "data_offset": 2048, 00:15:02.329 "data_size": 63488 00:15:02.329 }, 00:15:02.329 { 00:15:02.329 "name": "BaseBdev2", 00:15:02.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.329 "is_configured": false, 00:15:02.329 "data_offset": 0, 00:15:02.329 "data_size": 0 00:15:02.329 }, 00:15:02.329 { 00:15:02.329 "name": "BaseBdev3", 00:15:02.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.329 "is_configured": false, 00:15:02.329 "data_offset": 0, 00:15:02.329 "data_size": 0 00:15:02.329 }, 00:15:02.329 { 00:15:02.329 "name": "BaseBdev4", 00:15:02.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.329 "is_configured": false, 00:15:02.329 "data_offset": 0, 00:15:02.329 "data_size": 0 00:15:02.329 } 00:15:02.329 ] 00:15:02.329 }' 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.329 18:55:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.899 [2024-11-28 18:55:32.348446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.899 BaseBdev2 00:15:02.899 
18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.899 [ 00:15:02.899 { 00:15:02.899 "name": "BaseBdev2", 00:15:02.899 "aliases": [ 00:15:02.899 "d2f94ab0-ab48-4648-958e-57a112a42132" 00:15:02.899 ], 00:15:02.899 "product_name": "Malloc disk", 00:15:02.899 "block_size": 512, 00:15:02.899 "num_blocks": 65536, 00:15:02.899 "uuid": "d2f94ab0-ab48-4648-958e-57a112a42132", 00:15:02.899 "assigned_rate_limits": { 
00:15:02.899 "rw_ios_per_sec": 0, 00:15:02.899 "rw_mbytes_per_sec": 0, 00:15:02.899 "r_mbytes_per_sec": 0, 00:15:02.899 "w_mbytes_per_sec": 0 00:15:02.899 }, 00:15:02.899 "claimed": true, 00:15:02.899 "claim_type": "exclusive_write", 00:15:02.899 "zoned": false, 00:15:02.899 "supported_io_types": { 00:15:02.899 "read": true, 00:15:02.899 "write": true, 00:15:02.899 "unmap": true, 00:15:02.899 "flush": true, 00:15:02.899 "reset": true, 00:15:02.899 "nvme_admin": false, 00:15:02.899 "nvme_io": false, 00:15:02.899 "nvme_io_md": false, 00:15:02.899 "write_zeroes": true, 00:15:02.899 "zcopy": true, 00:15:02.899 "get_zone_info": false, 00:15:02.899 "zone_management": false, 00:15:02.899 "zone_append": false, 00:15:02.899 "compare": false, 00:15:02.899 "compare_and_write": false, 00:15:02.899 "abort": true, 00:15:02.899 "seek_hole": false, 00:15:02.899 "seek_data": false, 00:15:02.899 "copy": true, 00:15:02.899 "nvme_iov_md": false 00:15:02.899 }, 00:15:02.899 "memory_domains": [ 00:15:02.899 { 00:15:02.899 "dma_device_id": "system", 00:15:02.899 "dma_device_type": 1 00:15:02.899 }, 00:15:02.899 { 00:15:02.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.899 "dma_device_type": 2 00:15:02.899 } 00:15:02.899 ], 00:15:02.899 "driver_specific": {} 00:15:02.899 } 00:15:02.899 ] 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.899 "name": "Existed_Raid", 00:15:02.899 "uuid": "8fda41e7-6812-4626-a567-22468de506fc", 00:15:02.899 "strip_size_kb": 64, 00:15:02.899 "state": "configuring", 00:15:02.899 "raid_level": "raid5f", 00:15:02.899 "superblock": true, 00:15:02.899 "num_base_bdevs": 4, 00:15:02.899 "num_base_bdevs_discovered": 2, 00:15:02.899 
"num_base_bdevs_operational": 4, 00:15:02.899 "base_bdevs_list": [ 00:15:02.899 { 00:15:02.899 "name": "BaseBdev1", 00:15:02.899 "uuid": "9e34cebf-0e4f-4221-983a-1ea300bb498e", 00:15:02.899 "is_configured": true, 00:15:02.899 "data_offset": 2048, 00:15:02.899 "data_size": 63488 00:15:02.899 }, 00:15:02.899 { 00:15:02.899 "name": "BaseBdev2", 00:15:02.899 "uuid": "d2f94ab0-ab48-4648-958e-57a112a42132", 00:15:02.899 "is_configured": true, 00:15:02.899 "data_offset": 2048, 00:15:02.899 "data_size": 63488 00:15:02.899 }, 00:15:02.899 { 00:15:02.899 "name": "BaseBdev3", 00:15:02.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.899 "is_configured": false, 00:15:02.899 "data_offset": 0, 00:15:02.899 "data_size": 0 00:15:02.899 }, 00:15:02.899 { 00:15:02.899 "name": "BaseBdev4", 00:15:02.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.899 "is_configured": false, 00:15:02.899 "data_offset": 0, 00:15:02.899 "data_size": 0 00:15:02.899 } 00:15:02.899 ] 00:15:02.899 }' 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.899 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.469 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:03.469 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.469 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.469 BaseBdev3 00:15:03.470 [2024-11-28 18:55:32.818934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:03.470 18:55:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.470 [ 00:15:03.470 { 00:15:03.470 "name": "BaseBdev3", 00:15:03.470 "aliases": [ 00:15:03.470 "bc66fec0-fc77-47e1-85dd-66dc8233ded4" 00:15:03.470 ], 00:15:03.470 "product_name": "Malloc disk", 00:15:03.470 "block_size": 512, 00:15:03.470 "num_blocks": 65536, 00:15:03.470 "uuid": "bc66fec0-fc77-47e1-85dd-66dc8233ded4", 00:15:03.470 "assigned_rate_limits": { 00:15:03.470 "rw_ios_per_sec": 0, 00:15:03.470 "rw_mbytes_per_sec": 0, 00:15:03.470 "r_mbytes_per_sec": 0, 00:15:03.470 "w_mbytes_per_sec": 0 00:15:03.470 }, 00:15:03.470 "claimed": true, 00:15:03.470 "claim_type": "exclusive_write", 
00:15:03.470 "zoned": false, 00:15:03.470 "supported_io_types": { 00:15:03.470 "read": true, 00:15:03.470 "write": true, 00:15:03.470 "unmap": true, 00:15:03.470 "flush": true, 00:15:03.470 "reset": true, 00:15:03.470 "nvme_admin": false, 00:15:03.470 "nvme_io": false, 00:15:03.470 "nvme_io_md": false, 00:15:03.470 "write_zeroes": true, 00:15:03.470 "zcopy": true, 00:15:03.470 "get_zone_info": false, 00:15:03.470 "zone_management": false, 00:15:03.470 "zone_append": false, 00:15:03.470 "compare": false, 00:15:03.470 "compare_and_write": false, 00:15:03.470 "abort": true, 00:15:03.470 "seek_hole": false, 00:15:03.470 "seek_data": false, 00:15:03.470 "copy": true, 00:15:03.470 "nvme_iov_md": false 00:15:03.470 }, 00:15:03.470 "memory_domains": [ 00:15:03.470 { 00:15:03.470 "dma_device_id": "system", 00:15:03.470 "dma_device_type": 1 00:15:03.470 }, 00:15:03.470 { 00:15:03.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.470 "dma_device_type": 2 00:15:03.470 } 00:15:03.470 ], 00:15:03.470 "driver_specific": {} 00:15:03.470 } 00:15:03.470 ] 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.470 "name": "Existed_Raid", 00:15:03.470 "uuid": "8fda41e7-6812-4626-a567-22468de506fc", 00:15:03.470 "strip_size_kb": 64, 00:15:03.470 "state": "configuring", 00:15:03.470 "raid_level": "raid5f", 00:15:03.470 "superblock": true, 00:15:03.470 "num_base_bdevs": 4, 00:15:03.470 "num_base_bdevs_discovered": 3, 00:15:03.470 "num_base_bdevs_operational": 4, 00:15:03.470 "base_bdevs_list": [ 00:15:03.470 { 00:15:03.470 "name": "BaseBdev1", 00:15:03.470 "uuid": "9e34cebf-0e4f-4221-983a-1ea300bb498e", 00:15:03.470 "is_configured": true, 00:15:03.470 "data_offset": 2048, 
00:15:03.470 "data_size": 63488 00:15:03.470 }, 00:15:03.470 { 00:15:03.470 "name": "BaseBdev2", 00:15:03.470 "uuid": "d2f94ab0-ab48-4648-958e-57a112a42132", 00:15:03.470 "is_configured": true, 00:15:03.470 "data_offset": 2048, 00:15:03.470 "data_size": 63488 00:15:03.470 }, 00:15:03.470 { 00:15:03.470 "name": "BaseBdev3", 00:15:03.470 "uuid": "bc66fec0-fc77-47e1-85dd-66dc8233ded4", 00:15:03.470 "is_configured": true, 00:15:03.470 "data_offset": 2048, 00:15:03.470 "data_size": 63488 00:15:03.470 }, 00:15:03.470 { 00:15:03.470 "name": "BaseBdev4", 00:15:03.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.470 "is_configured": false, 00:15:03.470 "data_offset": 0, 00:15:03.470 "data_size": 0 00:15:03.470 } 00:15:03.470 ] 00:15:03.470 }' 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.470 18:55:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.730 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:03.730 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.730 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.990 [2024-11-28 18:55:33.342106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:03.990 BaseBdev4 00:15:03.990 [2024-11-28 18:55:33.342409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:03.990 [2024-11-28 18:55:33.342456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:03.990 [2024-11-28 18:55:33.342744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:03.990 [2024-11-28 18:55:33.343187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:03.990 
[2024-11-28 18:55:33.343201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:15:03.990 [2024-11-28 18:55:33.343324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.990 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.990 [ 00:15:03.990 { 00:15:03.990 "name": "BaseBdev4", 00:15:03.990 "aliases": [ 
00:15:03.990 "da457f7c-a370-42c5-8348-61b4aa03f1de" 00:15:03.990 ], 00:15:03.990 "product_name": "Malloc disk", 00:15:03.990 "block_size": 512, 00:15:03.990 "num_blocks": 65536, 00:15:03.990 "uuid": "da457f7c-a370-42c5-8348-61b4aa03f1de", 00:15:03.990 "assigned_rate_limits": { 00:15:03.990 "rw_ios_per_sec": 0, 00:15:03.990 "rw_mbytes_per_sec": 0, 00:15:03.990 "r_mbytes_per_sec": 0, 00:15:03.990 "w_mbytes_per_sec": 0 00:15:03.990 }, 00:15:03.990 "claimed": true, 00:15:03.990 "claim_type": "exclusive_write", 00:15:03.990 "zoned": false, 00:15:03.990 "supported_io_types": { 00:15:03.990 "read": true, 00:15:03.990 "write": true, 00:15:03.990 "unmap": true, 00:15:03.991 "flush": true, 00:15:03.991 "reset": true, 00:15:03.991 "nvme_admin": false, 00:15:03.991 "nvme_io": false, 00:15:03.991 "nvme_io_md": false, 00:15:03.991 "write_zeroes": true, 00:15:03.991 "zcopy": true, 00:15:03.991 "get_zone_info": false, 00:15:03.991 "zone_management": false, 00:15:03.991 "zone_append": false, 00:15:03.991 "compare": false, 00:15:03.991 "compare_and_write": false, 00:15:03.991 "abort": true, 00:15:03.991 "seek_hole": false, 00:15:03.991 "seek_data": false, 00:15:03.991 "copy": true, 00:15:03.991 "nvme_iov_md": false 00:15:03.991 }, 00:15:03.991 "memory_domains": [ 00:15:03.991 { 00:15:03.991 "dma_device_id": "system", 00:15:03.991 "dma_device_type": 1 00:15:03.991 }, 00:15:03.991 { 00:15:03.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.991 "dma_device_type": 2 00:15:03.991 } 00:15:03.991 ], 00:15:03.991 "driver_specific": {} 00:15:03.991 } 00:15:03.991 ] 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.991 "name": "Existed_Raid", 00:15:03.991 "uuid": 
"8fda41e7-6812-4626-a567-22468de506fc", 00:15:03.991 "strip_size_kb": 64, 00:15:03.991 "state": "online", 00:15:03.991 "raid_level": "raid5f", 00:15:03.991 "superblock": true, 00:15:03.991 "num_base_bdevs": 4, 00:15:03.991 "num_base_bdevs_discovered": 4, 00:15:03.991 "num_base_bdevs_operational": 4, 00:15:03.991 "base_bdevs_list": [ 00:15:03.991 { 00:15:03.991 "name": "BaseBdev1", 00:15:03.991 "uuid": "9e34cebf-0e4f-4221-983a-1ea300bb498e", 00:15:03.991 "is_configured": true, 00:15:03.991 "data_offset": 2048, 00:15:03.991 "data_size": 63488 00:15:03.991 }, 00:15:03.991 { 00:15:03.991 "name": "BaseBdev2", 00:15:03.991 "uuid": "d2f94ab0-ab48-4648-958e-57a112a42132", 00:15:03.991 "is_configured": true, 00:15:03.991 "data_offset": 2048, 00:15:03.991 "data_size": 63488 00:15:03.991 }, 00:15:03.991 { 00:15:03.991 "name": "BaseBdev3", 00:15:03.991 "uuid": "bc66fec0-fc77-47e1-85dd-66dc8233ded4", 00:15:03.991 "is_configured": true, 00:15:03.991 "data_offset": 2048, 00:15:03.991 "data_size": 63488 00:15:03.991 }, 00:15:03.991 { 00:15:03.991 "name": "BaseBdev4", 00:15:03.991 "uuid": "da457f7c-a370-42c5-8348-61b4aa03f1de", 00:15:03.991 "is_configured": true, 00:15:03.991 "data_offset": 2048, 00:15:03.991 "data_size": 63488 00:15:03.991 } 00:15:03.991 ] 00:15:03.991 }' 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.991 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.251 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:04.251 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:04.251 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.251 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.251 18:55:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.251 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.251 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:04.251 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.251 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.251 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.251 [2024-11-28 18:55:33.850487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.511 "name": "Existed_Raid", 00:15:04.511 "aliases": [ 00:15:04.511 "8fda41e7-6812-4626-a567-22468de506fc" 00:15:04.511 ], 00:15:04.511 "product_name": "Raid Volume", 00:15:04.511 "block_size": 512, 00:15:04.511 "num_blocks": 190464, 00:15:04.511 "uuid": "8fda41e7-6812-4626-a567-22468de506fc", 00:15:04.511 "assigned_rate_limits": { 00:15:04.511 "rw_ios_per_sec": 0, 00:15:04.511 "rw_mbytes_per_sec": 0, 00:15:04.511 "r_mbytes_per_sec": 0, 00:15:04.511 "w_mbytes_per_sec": 0 00:15:04.511 }, 00:15:04.511 "claimed": false, 00:15:04.511 "zoned": false, 00:15:04.511 "supported_io_types": { 00:15:04.511 "read": true, 00:15:04.511 "write": true, 00:15:04.511 "unmap": false, 00:15:04.511 "flush": false, 00:15:04.511 "reset": true, 00:15:04.511 "nvme_admin": false, 00:15:04.511 "nvme_io": false, 00:15:04.511 "nvme_io_md": false, 00:15:04.511 "write_zeroes": true, 00:15:04.511 "zcopy": false, 00:15:04.511 "get_zone_info": false, 00:15:04.511 "zone_management": false, 00:15:04.511 
"zone_append": false, 00:15:04.511 "compare": false, 00:15:04.511 "compare_and_write": false, 00:15:04.511 "abort": false, 00:15:04.511 "seek_hole": false, 00:15:04.511 "seek_data": false, 00:15:04.511 "copy": false, 00:15:04.511 "nvme_iov_md": false 00:15:04.511 }, 00:15:04.511 "driver_specific": { 00:15:04.511 "raid": { 00:15:04.511 "uuid": "8fda41e7-6812-4626-a567-22468de506fc", 00:15:04.511 "strip_size_kb": 64, 00:15:04.511 "state": "online", 00:15:04.511 "raid_level": "raid5f", 00:15:04.511 "superblock": true, 00:15:04.511 "num_base_bdevs": 4, 00:15:04.511 "num_base_bdevs_discovered": 4, 00:15:04.511 "num_base_bdevs_operational": 4, 00:15:04.511 "base_bdevs_list": [ 00:15:04.511 { 00:15:04.511 "name": "BaseBdev1", 00:15:04.511 "uuid": "9e34cebf-0e4f-4221-983a-1ea300bb498e", 00:15:04.511 "is_configured": true, 00:15:04.511 "data_offset": 2048, 00:15:04.511 "data_size": 63488 00:15:04.511 }, 00:15:04.511 { 00:15:04.511 "name": "BaseBdev2", 00:15:04.511 "uuid": "d2f94ab0-ab48-4648-958e-57a112a42132", 00:15:04.511 "is_configured": true, 00:15:04.511 "data_offset": 2048, 00:15:04.511 "data_size": 63488 00:15:04.511 }, 00:15:04.511 { 00:15:04.511 "name": "BaseBdev3", 00:15:04.511 "uuid": "bc66fec0-fc77-47e1-85dd-66dc8233ded4", 00:15:04.511 "is_configured": true, 00:15:04.511 "data_offset": 2048, 00:15:04.511 "data_size": 63488 00:15:04.511 }, 00:15:04.511 { 00:15:04.511 "name": "BaseBdev4", 00:15:04.511 "uuid": "da457f7c-a370-42c5-8348-61b4aa03f1de", 00:15:04.511 "is_configured": true, 00:15:04.511 "data_offset": 2048, 00:15:04.511 "data_size": 63488 00:15:04.511 } 00:15:04.511 ] 00:15:04.511 } 00:15:04.511 } 00:15:04.511 }' 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:04.511 BaseBdev2 00:15:04.511 BaseBdev3 
00:15:04.511 BaseBdev4' 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.511 18:55:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.511 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.772 [2024-11-28 18:55:34.158390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.772 
18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.772 "name": "Existed_Raid", 00:15:04.772 "uuid": "8fda41e7-6812-4626-a567-22468de506fc", 00:15:04.772 "strip_size_kb": 64, 00:15:04.772 "state": "online", 00:15:04.772 "raid_level": "raid5f", 00:15:04.772 "superblock": true, 00:15:04.772 "num_base_bdevs": 4, 00:15:04.772 "num_base_bdevs_discovered": 3, 00:15:04.772 "num_base_bdevs_operational": 3, 00:15:04.772 "base_bdevs_list": [ 00:15:04.772 { 00:15:04.772 "name": null, 00:15:04.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.772 "is_configured": false, 00:15:04.772 "data_offset": 0, 00:15:04.772 "data_size": 63488 00:15:04.772 }, 00:15:04.772 { 00:15:04.772 "name": "BaseBdev2", 00:15:04.772 "uuid": "d2f94ab0-ab48-4648-958e-57a112a42132", 
00:15:04.772 "is_configured": true, 00:15:04.772 "data_offset": 2048, 00:15:04.772 "data_size": 63488 00:15:04.772 }, 00:15:04.772 { 00:15:04.772 "name": "BaseBdev3", 00:15:04.772 "uuid": "bc66fec0-fc77-47e1-85dd-66dc8233ded4", 00:15:04.772 "is_configured": true, 00:15:04.772 "data_offset": 2048, 00:15:04.772 "data_size": 63488 00:15:04.772 }, 00:15:04.772 { 00:15:04.772 "name": "BaseBdev4", 00:15:04.772 "uuid": "da457f7c-a370-42c5-8348-61b4aa03f1de", 00:15:04.772 "is_configured": true, 00:15:04.772 "data_offset": 2048, 00:15:04.772 "data_size": 63488 00:15:04.772 } 00:15:04.772 ] 00:15:04.772 }' 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.772 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.032 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:05.032 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:05.032 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.032 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:05.032 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.032 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:05.292 
18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.292 [2024-11-28 18:55:34.681740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:05.292 [2024-11-28 18:55:34.681889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.292 [2024-11-28 18:55:34.693108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.292 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.293 18:55:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.293 [2024-11-28 18:55:34.753156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.293 [2024-11-28 18:55:34.824606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:05.293 [2024-11-28 18:55:34.824721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:15:05.293 18:55:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.293 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.555 BaseBdev2 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.555 [ 00:15:05.555 { 00:15:05.555 "name": "BaseBdev2", 00:15:05.555 "aliases": [ 00:15:05.555 "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01" 00:15:05.555 ], 00:15:05.555 "product_name": "Malloc disk", 00:15:05.555 "block_size": 512, 00:15:05.555 "num_blocks": 65536, 00:15:05.555 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:05.555 "assigned_rate_limits": { 00:15:05.555 "rw_ios_per_sec": 0, 00:15:05.555 "rw_mbytes_per_sec": 0, 00:15:05.555 "r_mbytes_per_sec": 0, 00:15:05.555 "w_mbytes_per_sec": 0 00:15:05.555 }, 
00:15:05.555 "claimed": false, 00:15:05.555 "zoned": false, 00:15:05.555 "supported_io_types": { 00:15:05.555 "read": true, 00:15:05.555 "write": true, 00:15:05.555 "unmap": true, 00:15:05.555 "flush": true, 00:15:05.555 "reset": true, 00:15:05.555 "nvme_admin": false, 00:15:05.555 "nvme_io": false, 00:15:05.555 "nvme_io_md": false, 00:15:05.555 "write_zeroes": true, 00:15:05.555 "zcopy": true, 00:15:05.555 "get_zone_info": false, 00:15:05.555 "zone_management": false, 00:15:05.555 "zone_append": false, 00:15:05.555 "compare": false, 00:15:05.555 "compare_and_write": false, 00:15:05.555 "abort": true, 00:15:05.555 "seek_hole": false, 00:15:05.555 "seek_data": false, 00:15:05.555 "copy": true, 00:15:05.555 "nvme_iov_md": false 00:15:05.555 }, 00:15:05.555 "memory_domains": [ 00:15:05.555 { 00:15:05.555 "dma_device_id": "system", 00:15:05.555 "dma_device_type": 1 00:15:05.555 }, 00:15:05.555 { 00:15:05.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.555 "dma_device_type": 2 00:15:05.555 } 00:15:05.555 ], 00:15:05.555 "driver_specific": {} 00:15:05.555 } 00:15:05.555 ] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.555 BaseBdev3 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.555 [ 00:15:05.555 { 00:15:05.555 "name": "BaseBdev3", 00:15:05.555 "aliases": [ 00:15:05.555 "0f58cfe5-130e-404f-8dc0-3e596857ac49" 00:15:05.555 ], 00:15:05.555 "product_name": "Malloc disk", 00:15:05.555 "block_size": 512, 00:15:05.555 "num_blocks": 65536, 00:15:05.555 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:05.555 "assigned_rate_limits": { 00:15:05.555 "rw_ios_per_sec": 0, 00:15:05.555 
"rw_mbytes_per_sec": 0, 00:15:05.555 "r_mbytes_per_sec": 0, 00:15:05.555 "w_mbytes_per_sec": 0 00:15:05.555 }, 00:15:05.555 "claimed": false, 00:15:05.555 "zoned": false, 00:15:05.555 "supported_io_types": { 00:15:05.555 "read": true, 00:15:05.555 "write": true, 00:15:05.555 "unmap": true, 00:15:05.555 "flush": true, 00:15:05.555 "reset": true, 00:15:05.555 "nvme_admin": false, 00:15:05.555 "nvme_io": false, 00:15:05.555 "nvme_io_md": false, 00:15:05.555 "write_zeroes": true, 00:15:05.555 "zcopy": true, 00:15:05.555 "get_zone_info": false, 00:15:05.555 "zone_management": false, 00:15:05.555 "zone_append": false, 00:15:05.555 "compare": false, 00:15:05.555 "compare_and_write": false, 00:15:05.555 "abort": true, 00:15:05.555 "seek_hole": false, 00:15:05.555 "seek_data": false, 00:15:05.555 "copy": true, 00:15:05.555 "nvme_iov_md": false 00:15:05.555 }, 00:15:05.555 "memory_domains": [ 00:15:05.555 { 00:15:05.555 "dma_device_id": "system", 00:15:05.555 "dma_device_type": 1 00:15:05.555 }, 00:15:05.555 { 00:15:05.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.555 "dma_device_type": 2 00:15:05.555 } 00:15:05.555 ], 00:15:05.555 "driver_specific": {} 00:15:05.555 } 00:15:05.555 ] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.555 18:55:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:05.555 BaseBdev4 00:15:05.555 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.555 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:05.555 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:05.555 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.555 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:05.555 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.555 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.556 [ 00:15:05.556 { 00:15:05.556 "name": "BaseBdev4", 00:15:05.556 "aliases": [ 00:15:05.556 "a48354e4-3442-458a-a98e-f4cd39da1168" 00:15:05.556 ], 00:15:05.556 "product_name": "Malloc disk", 00:15:05.556 "block_size": 512, 00:15:05.556 "num_blocks": 65536, 00:15:05.556 "uuid": "a48354e4-3442-458a-a98e-f4cd39da1168", 
00:15:05.556 "assigned_rate_limits": { 00:15:05.556 "rw_ios_per_sec": 0, 00:15:05.556 "rw_mbytes_per_sec": 0, 00:15:05.556 "r_mbytes_per_sec": 0, 00:15:05.556 "w_mbytes_per_sec": 0 00:15:05.556 }, 00:15:05.556 "claimed": false, 00:15:05.556 "zoned": false, 00:15:05.556 "supported_io_types": { 00:15:05.556 "read": true, 00:15:05.556 "write": true, 00:15:05.556 "unmap": true, 00:15:05.556 "flush": true, 00:15:05.556 "reset": true, 00:15:05.556 "nvme_admin": false, 00:15:05.556 "nvme_io": false, 00:15:05.556 "nvme_io_md": false, 00:15:05.556 "write_zeroes": true, 00:15:05.556 "zcopy": true, 00:15:05.556 "get_zone_info": false, 00:15:05.556 "zone_management": false, 00:15:05.556 "zone_append": false, 00:15:05.556 "compare": false, 00:15:05.556 "compare_and_write": false, 00:15:05.556 "abort": true, 00:15:05.556 "seek_hole": false, 00:15:05.556 "seek_data": false, 00:15:05.556 "copy": true, 00:15:05.556 "nvme_iov_md": false 00:15:05.556 }, 00:15:05.556 "memory_domains": [ 00:15:05.556 { 00:15:05.556 "dma_device_id": "system", 00:15:05.556 "dma_device_type": 1 00:15:05.556 }, 00:15:05.556 { 00:15:05.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.556 "dma_device_type": 2 00:15:05.556 } 00:15:05.556 ], 00:15:05.556 "driver_specific": {} 00:15:05.556 } 00:15:05.556 ] 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.556 [2024-11-28 18:55:35.055671] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.556 [2024-11-28 18:55:35.055759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.556 [2024-11-28 18:55:35.055805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.556 [2024-11-28 18:55:35.057636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.556 [2024-11-28 18:55:35.057728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.556 "name": "Existed_Raid", 00:15:05.556 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 00:15:05.556 "strip_size_kb": 64, 00:15:05.556 "state": "configuring", 00:15:05.556 "raid_level": "raid5f", 00:15:05.556 "superblock": true, 00:15:05.556 "num_base_bdevs": 4, 00:15:05.556 "num_base_bdevs_discovered": 3, 00:15:05.556 "num_base_bdevs_operational": 4, 00:15:05.556 "base_bdevs_list": [ 00:15:05.556 { 00:15:05.556 "name": "BaseBdev1", 00:15:05.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.556 "is_configured": false, 00:15:05.556 "data_offset": 0, 00:15:05.556 "data_size": 0 00:15:05.556 }, 00:15:05.556 { 00:15:05.556 "name": "BaseBdev2", 00:15:05.556 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:05.556 "is_configured": true, 00:15:05.556 "data_offset": 2048, 00:15:05.556 "data_size": 63488 00:15:05.556 }, 00:15:05.556 { 00:15:05.556 "name": "BaseBdev3", 00:15:05.556 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:05.556 "is_configured": true, 00:15:05.556 "data_offset": 2048, 00:15:05.556 "data_size": 63488 00:15:05.556 }, 00:15:05.556 { 00:15:05.556 "name": "BaseBdev4", 00:15:05.556 "uuid": 
"a48354e4-3442-458a-a98e-f4cd39da1168", 00:15:05.556 "is_configured": true, 00:15:05.556 "data_offset": 2048, 00:15:05.556 "data_size": 63488 00:15:05.556 } 00:15:05.556 ] 00:15:05.556 }' 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.556 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.124 [2024-11-28 18:55:35.479749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.124 18:55:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.124 "name": "Existed_Raid", 00:15:06.124 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 00:15:06.124 "strip_size_kb": 64, 00:15:06.124 "state": "configuring", 00:15:06.124 "raid_level": "raid5f", 00:15:06.124 "superblock": true, 00:15:06.124 "num_base_bdevs": 4, 00:15:06.124 "num_base_bdevs_discovered": 2, 00:15:06.124 "num_base_bdevs_operational": 4, 00:15:06.124 "base_bdevs_list": [ 00:15:06.124 { 00:15:06.124 "name": "BaseBdev1", 00:15:06.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.124 "is_configured": false, 00:15:06.124 "data_offset": 0, 00:15:06.124 "data_size": 0 00:15:06.124 }, 00:15:06.124 { 00:15:06.124 "name": null, 00:15:06.124 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:06.124 "is_configured": false, 00:15:06.124 "data_offset": 0, 00:15:06.124 "data_size": 63488 00:15:06.124 }, 00:15:06.124 { 00:15:06.124 "name": "BaseBdev3", 00:15:06.124 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:06.124 "is_configured": true, 00:15:06.124 "data_offset": 2048, 00:15:06.124 "data_size": 63488 00:15:06.124 }, 00:15:06.124 { 
00:15:06.124 "name": "BaseBdev4", 00:15:06.124 "uuid": "a48354e4-3442-458a-a98e-f4cd39da1168", 00:15:06.124 "is_configured": true, 00:15:06.124 "data_offset": 2048, 00:15:06.124 "data_size": 63488 00:15:06.124 } 00:15:06.124 ] 00:15:06.124 }' 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.124 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.384 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.384 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.384 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.384 18:55:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:06.384 18:55:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.645 BaseBdev1 00:15:06.645 [2024-11-28 18:55:36.023044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- 
# local bdev_name=BaseBdev1 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.645 [ 00:15:06.645 { 00:15:06.645 "name": "BaseBdev1", 00:15:06.645 "aliases": [ 00:15:06.645 "c6054aa1-8a14-436f-8410-ff0dbce61abe" 00:15:06.645 ], 00:15:06.645 "product_name": "Malloc disk", 00:15:06.645 "block_size": 512, 00:15:06.645 "num_blocks": 65536, 00:15:06.645 "uuid": "c6054aa1-8a14-436f-8410-ff0dbce61abe", 00:15:06.645 "assigned_rate_limits": { 00:15:06.645 "rw_ios_per_sec": 0, 00:15:06.645 "rw_mbytes_per_sec": 0, 00:15:06.645 "r_mbytes_per_sec": 0, 00:15:06.645 "w_mbytes_per_sec": 0 00:15:06.645 }, 00:15:06.645 "claimed": true, 00:15:06.645 "claim_type": "exclusive_write", 00:15:06.645 "zoned": false, 00:15:06.645 "supported_io_types": { 00:15:06.645 
"read": true, 00:15:06.645 "write": true, 00:15:06.645 "unmap": true, 00:15:06.645 "flush": true, 00:15:06.645 "reset": true, 00:15:06.645 "nvme_admin": false, 00:15:06.645 "nvme_io": false, 00:15:06.645 "nvme_io_md": false, 00:15:06.645 "write_zeroes": true, 00:15:06.645 "zcopy": true, 00:15:06.645 "get_zone_info": false, 00:15:06.645 "zone_management": false, 00:15:06.645 "zone_append": false, 00:15:06.645 "compare": false, 00:15:06.645 "compare_and_write": false, 00:15:06.645 "abort": true, 00:15:06.645 "seek_hole": false, 00:15:06.645 "seek_data": false, 00:15:06.645 "copy": true, 00:15:06.645 "nvme_iov_md": false 00:15:06.645 }, 00:15:06.645 "memory_domains": [ 00:15:06.645 { 00:15:06.645 "dma_device_id": "system", 00:15:06.645 "dma_device_type": 1 00:15:06.645 }, 00:15:06.645 { 00:15:06.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.645 "dma_device_type": 2 00:15:06.645 } 00:15:06.645 ], 00:15:06.645 "driver_specific": {} 00:15:06.645 } 00:15:06.645 ] 00:15:06.645 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.646 18:55:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.646 "name": "Existed_Raid", 00:15:06.646 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 00:15:06.646 "strip_size_kb": 64, 00:15:06.646 "state": "configuring", 00:15:06.646 "raid_level": "raid5f", 00:15:06.646 "superblock": true, 00:15:06.646 "num_base_bdevs": 4, 00:15:06.646 "num_base_bdevs_discovered": 3, 00:15:06.646 "num_base_bdevs_operational": 4, 00:15:06.646 "base_bdevs_list": [ 00:15:06.646 { 00:15:06.646 "name": "BaseBdev1", 00:15:06.646 "uuid": "c6054aa1-8a14-436f-8410-ff0dbce61abe", 00:15:06.646 "is_configured": true, 00:15:06.646 "data_offset": 2048, 00:15:06.646 "data_size": 63488 00:15:06.646 }, 00:15:06.646 { 00:15:06.646 "name": null, 00:15:06.646 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:06.646 "is_configured": false, 00:15:06.646 "data_offset": 0, 00:15:06.646 "data_size": 63488 00:15:06.646 }, 00:15:06.646 { 
00:15:06.646 "name": "BaseBdev3", 00:15:06.646 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:06.646 "is_configured": true, 00:15:06.646 "data_offset": 2048, 00:15:06.646 "data_size": 63488 00:15:06.646 }, 00:15:06.646 { 00:15:06.646 "name": "BaseBdev4", 00:15:06.646 "uuid": "a48354e4-3442-458a-a98e-f4cd39da1168", 00:15:06.646 "is_configured": true, 00:15:06.646 "data_offset": 2048, 00:15:06.646 "data_size": 63488 00:15:06.646 } 00:15:06.646 ] 00:15:06.646 }' 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.646 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.906 [2024-11-28 18:55:36.499209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.906 18:55:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.906 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.166 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.166 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.166 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.166 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.166 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.166 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.166 "name": "Existed_Raid", 00:15:07.166 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 
00:15:07.166 "strip_size_kb": 64, 00:15:07.166 "state": "configuring", 00:15:07.166 "raid_level": "raid5f", 00:15:07.166 "superblock": true, 00:15:07.166 "num_base_bdevs": 4, 00:15:07.166 "num_base_bdevs_discovered": 2, 00:15:07.166 "num_base_bdevs_operational": 4, 00:15:07.166 "base_bdevs_list": [ 00:15:07.166 { 00:15:07.166 "name": "BaseBdev1", 00:15:07.166 "uuid": "c6054aa1-8a14-436f-8410-ff0dbce61abe", 00:15:07.166 "is_configured": true, 00:15:07.166 "data_offset": 2048, 00:15:07.166 "data_size": 63488 00:15:07.166 }, 00:15:07.166 { 00:15:07.166 "name": null, 00:15:07.166 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:07.166 "is_configured": false, 00:15:07.166 "data_offset": 0, 00:15:07.166 "data_size": 63488 00:15:07.166 }, 00:15:07.166 { 00:15:07.166 "name": null, 00:15:07.166 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:07.166 "is_configured": false, 00:15:07.166 "data_offset": 0, 00:15:07.166 "data_size": 63488 00:15:07.166 }, 00:15:07.166 { 00:15:07.166 "name": "BaseBdev4", 00:15:07.166 "uuid": "a48354e4-3442-458a-a98e-f4cd39da1168", 00:15:07.166 "is_configured": true, 00:15:07.166 "data_offset": 2048, 00:15:07.166 "data_size": 63488 00:15:07.166 } 00:15:07.166 ] 00:15:07.166 }' 00:15:07.166 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.166 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.427 [2024-11-28 18:55:36.991373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.427 18:55:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.427 
18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.427 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.427 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.427 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.427 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.688 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.688 "name": "Existed_Raid", 00:15:07.688 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 00:15:07.688 "strip_size_kb": 64, 00:15:07.688 "state": "configuring", 00:15:07.688 "raid_level": "raid5f", 00:15:07.688 "superblock": true, 00:15:07.688 "num_base_bdevs": 4, 00:15:07.688 "num_base_bdevs_discovered": 3, 00:15:07.688 "num_base_bdevs_operational": 4, 00:15:07.688 "base_bdevs_list": [ 00:15:07.688 { 00:15:07.688 "name": "BaseBdev1", 00:15:07.688 "uuid": "c6054aa1-8a14-436f-8410-ff0dbce61abe", 00:15:07.688 "is_configured": true, 00:15:07.688 "data_offset": 2048, 00:15:07.688 "data_size": 63488 00:15:07.688 }, 00:15:07.688 { 00:15:07.688 "name": null, 00:15:07.688 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:07.688 "is_configured": false, 00:15:07.688 "data_offset": 0, 00:15:07.688 "data_size": 63488 00:15:07.688 }, 00:15:07.688 { 00:15:07.688 "name": "BaseBdev3", 00:15:07.688 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:07.688 "is_configured": true, 00:15:07.688 "data_offset": 2048, 00:15:07.688 "data_size": 63488 00:15:07.688 }, 00:15:07.688 { 00:15:07.688 "name": "BaseBdev4", 00:15:07.688 "uuid": "a48354e4-3442-458a-a98e-f4cd39da1168", 00:15:07.688 "is_configured": true, 00:15:07.688 "data_offset": 2048, 00:15:07.688 "data_size": 63488 00:15:07.688 } 
00:15:07.688 ] 00:15:07.688 }' 00:15:07.688 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.688 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.949 [2024-11-28 18:55:37.487517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.949 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.949 "name": "Existed_Raid", 00:15:07.949 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 00:15:07.949 "strip_size_kb": 64, 00:15:07.950 "state": "configuring", 00:15:07.950 "raid_level": "raid5f", 00:15:07.950 "superblock": true, 00:15:07.950 "num_base_bdevs": 4, 00:15:07.950 "num_base_bdevs_discovered": 2, 00:15:07.950 "num_base_bdevs_operational": 4, 00:15:07.950 "base_bdevs_list": [ 00:15:07.950 { 00:15:07.950 "name": null, 00:15:07.950 "uuid": "c6054aa1-8a14-436f-8410-ff0dbce61abe", 00:15:07.950 "is_configured": false, 00:15:07.950 
"data_offset": 0, 00:15:07.950 "data_size": 63488 00:15:07.950 }, 00:15:07.950 { 00:15:07.950 "name": null, 00:15:07.950 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:07.950 "is_configured": false, 00:15:07.950 "data_offset": 0, 00:15:07.950 "data_size": 63488 00:15:07.950 }, 00:15:07.950 { 00:15:07.950 "name": "BaseBdev3", 00:15:07.950 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:07.950 "is_configured": true, 00:15:07.950 "data_offset": 2048, 00:15:07.950 "data_size": 63488 00:15:07.950 }, 00:15:07.950 { 00:15:07.950 "name": "BaseBdev4", 00:15:07.950 "uuid": "a48354e4-3442-458a-a98e-f4cd39da1168", 00:15:07.950 "is_configured": true, 00:15:07.950 "data_offset": 2048, 00:15:07.950 "data_size": 63488 00:15:07.950 } 00:15:07.950 ] 00:15:07.950 }' 00:15:07.950 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.950 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.520 18:55:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.520 [2024-11-28 18:55:37.942001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.520 "name": "Existed_Raid", 00:15:08.520 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 00:15:08.520 "strip_size_kb": 64, 00:15:08.520 "state": "configuring", 00:15:08.520 "raid_level": "raid5f", 00:15:08.520 "superblock": true, 00:15:08.520 "num_base_bdevs": 4, 00:15:08.520 "num_base_bdevs_discovered": 3, 00:15:08.520 "num_base_bdevs_operational": 4, 00:15:08.520 "base_bdevs_list": [ 00:15:08.520 { 00:15:08.520 "name": null, 00:15:08.520 "uuid": "c6054aa1-8a14-436f-8410-ff0dbce61abe", 00:15:08.520 "is_configured": false, 00:15:08.520 "data_offset": 0, 00:15:08.520 "data_size": 63488 00:15:08.520 }, 00:15:08.520 { 00:15:08.520 "name": "BaseBdev2", 00:15:08.520 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:08.520 "is_configured": true, 00:15:08.520 "data_offset": 2048, 00:15:08.520 "data_size": 63488 00:15:08.520 }, 00:15:08.520 { 00:15:08.520 "name": "BaseBdev3", 00:15:08.520 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:08.520 "is_configured": true, 00:15:08.520 "data_offset": 2048, 00:15:08.520 "data_size": 63488 00:15:08.520 }, 00:15:08.520 { 00:15:08.520 "name": "BaseBdev4", 00:15:08.520 "uuid": "a48354e4-3442-458a-a98e-f4cd39da1168", 00:15:08.520 "is_configured": true, 00:15:08.520 "data_offset": 2048, 00:15:08.520 "data_size": 63488 00:15:08.520 } 00:15:08.520 ] 00:15:08.520 }' 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.520 18:55:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:08.780 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c6054aa1-8a14-436f-8410-ff0dbce61abe 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.041 NewBaseBdev 00:15:09.041 [2024-11-28 18:55:38.401110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:09.041 [2024-11-28 18:55:38.401288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:09.041 [2024-11-28 18:55:38.401304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:09.041 [2024-11-28 18:55:38.401575] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:15:09.041 [2024-11-28 18:55:38.402056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:09.041 [2024-11-28 18:55:38.402074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:09.041 [2024-11-28 18:55:38.402178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.041 
18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.041 [ 00:15:09.041 { 00:15:09.041 "name": "NewBaseBdev", 00:15:09.041 "aliases": [ 00:15:09.041 "c6054aa1-8a14-436f-8410-ff0dbce61abe" 00:15:09.041 ], 00:15:09.041 "product_name": "Malloc disk", 00:15:09.041 "block_size": 512, 00:15:09.041 "num_blocks": 65536, 00:15:09.041 "uuid": "c6054aa1-8a14-436f-8410-ff0dbce61abe", 00:15:09.041 "assigned_rate_limits": { 00:15:09.041 "rw_ios_per_sec": 0, 00:15:09.041 "rw_mbytes_per_sec": 0, 00:15:09.041 "r_mbytes_per_sec": 0, 00:15:09.041 "w_mbytes_per_sec": 0 00:15:09.041 }, 00:15:09.041 "claimed": true, 00:15:09.041 "claim_type": "exclusive_write", 00:15:09.041 "zoned": false, 00:15:09.041 "supported_io_types": { 00:15:09.041 "read": true, 00:15:09.041 "write": true, 00:15:09.041 "unmap": true, 00:15:09.041 "flush": true, 00:15:09.041 "reset": true, 00:15:09.041 "nvme_admin": false, 00:15:09.041 "nvme_io": false, 00:15:09.041 "nvme_io_md": false, 00:15:09.041 "write_zeroes": true, 00:15:09.041 "zcopy": true, 00:15:09.041 "get_zone_info": false, 00:15:09.041 "zone_management": false, 00:15:09.041 "zone_append": false, 00:15:09.041 "compare": false, 00:15:09.041 "compare_and_write": false, 00:15:09.041 "abort": true, 00:15:09.041 "seek_hole": false, 00:15:09.041 "seek_data": false, 00:15:09.041 "copy": true, 00:15:09.041 "nvme_iov_md": false 00:15:09.041 }, 00:15:09.041 "memory_domains": [ 00:15:09.041 { 00:15:09.041 "dma_device_id": "system", 00:15:09.041 "dma_device_type": 1 00:15:09.041 }, 00:15:09.041 { 00:15:09.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.041 "dma_device_type": 2 00:15:09.041 } 00:15:09.041 ], 00:15:09.041 "driver_specific": {} 00:15:09.041 } 00:15:09.041 ] 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.041 18:55:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.041 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.041 "name": "Existed_Raid", 00:15:09.041 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 00:15:09.041 
"strip_size_kb": 64, 00:15:09.041 "state": "online", 00:15:09.041 "raid_level": "raid5f", 00:15:09.041 "superblock": true, 00:15:09.041 "num_base_bdevs": 4, 00:15:09.041 "num_base_bdevs_discovered": 4, 00:15:09.041 "num_base_bdevs_operational": 4, 00:15:09.041 "base_bdevs_list": [ 00:15:09.041 { 00:15:09.041 "name": "NewBaseBdev", 00:15:09.042 "uuid": "c6054aa1-8a14-436f-8410-ff0dbce61abe", 00:15:09.042 "is_configured": true, 00:15:09.042 "data_offset": 2048, 00:15:09.042 "data_size": 63488 00:15:09.042 }, 00:15:09.042 { 00:15:09.042 "name": "BaseBdev2", 00:15:09.042 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:09.042 "is_configured": true, 00:15:09.042 "data_offset": 2048, 00:15:09.042 "data_size": 63488 00:15:09.042 }, 00:15:09.042 { 00:15:09.042 "name": "BaseBdev3", 00:15:09.042 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:09.042 "is_configured": true, 00:15:09.042 "data_offset": 2048, 00:15:09.042 "data_size": 63488 00:15:09.042 }, 00:15:09.042 { 00:15:09.042 "name": "BaseBdev4", 00:15:09.042 "uuid": "a48354e4-3442-458a-a98e-f4cd39da1168", 00:15:09.042 "is_configured": true, 00:15:09.042 "data_offset": 2048, 00:15:09.042 "data_size": 63488 00:15:09.042 } 00:15:09.042 ] 00:15:09.042 }' 00:15:09.042 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.042 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@184 -- # local name 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.302 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.302 [2024-11-28 18:55:38.889474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.561 18:55:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.562 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.562 "name": "Existed_Raid", 00:15:09.562 "aliases": [ 00:15:09.562 "ab171b4a-d98d-47d2-9d90-2849c3998ace" 00:15:09.562 ], 00:15:09.562 "product_name": "Raid Volume", 00:15:09.562 "block_size": 512, 00:15:09.562 "num_blocks": 190464, 00:15:09.562 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 00:15:09.562 "assigned_rate_limits": { 00:15:09.562 "rw_ios_per_sec": 0, 00:15:09.562 "rw_mbytes_per_sec": 0, 00:15:09.562 "r_mbytes_per_sec": 0, 00:15:09.562 "w_mbytes_per_sec": 0 00:15:09.562 }, 00:15:09.562 "claimed": false, 00:15:09.562 "zoned": false, 00:15:09.562 "supported_io_types": { 00:15:09.562 "read": true, 00:15:09.562 "write": true, 00:15:09.562 "unmap": false, 00:15:09.562 "flush": false, 00:15:09.562 "reset": true, 00:15:09.562 "nvme_admin": false, 00:15:09.562 "nvme_io": false, 00:15:09.562 "nvme_io_md": false, 00:15:09.562 "write_zeroes": true, 00:15:09.562 "zcopy": false, 00:15:09.562 "get_zone_info": false, 00:15:09.562 "zone_management": false, 00:15:09.562 "zone_append": false, 00:15:09.562 "compare": 
false, 00:15:09.562 "compare_and_write": false, 00:15:09.562 "abort": false, 00:15:09.562 "seek_hole": false, 00:15:09.562 "seek_data": false, 00:15:09.562 "copy": false, 00:15:09.562 "nvme_iov_md": false 00:15:09.562 }, 00:15:09.562 "driver_specific": { 00:15:09.562 "raid": { 00:15:09.562 "uuid": "ab171b4a-d98d-47d2-9d90-2849c3998ace", 00:15:09.562 "strip_size_kb": 64, 00:15:09.562 "state": "online", 00:15:09.562 "raid_level": "raid5f", 00:15:09.562 "superblock": true, 00:15:09.562 "num_base_bdevs": 4, 00:15:09.562 "num_base_bdevs_discovered": 4, 00:15:09.562 "num_base_bdevs_operational": 4, 00:15:09.562 "base_bdevs_list": [ 00:15:09.562 { 00:15:09.562 "name": "NewBaseBdev", 00:15:09.562 "uuid": "c6054aa1-8a14-436f-8410-ff0dbce61abe", 00:15:09.562 "is_configured": true, 00:15:09.562 "data_offset": 2048, 00:15:09.562 "data_size": 63488 00:15:09.562 }, 00:15:09.562 { 00:15:09.562 "name": "BaseBdev2", 00:15:09.562 "uuid": "6e6aa62f-6f8e-4dee-a0e9-cd88c11cac01", 00:15:09.562 "is_configured": true, 00:15:09.562 "data_offset": 2048, 00:15:09.562 "data_size": 63488 00:15:09.562 }, 00:15:09.562 { 00:15:09.562 "name": "BaseBdev3", 00:15:09.562 "uuid": "0f58cfe5-130e-404f-8dc0-3e596857ac49", 00:15:09.562 "is_configured": true, 00:15:09.562 "data_offset": 2048, 00:15:09.562 "data_size": 63488 00:15:09.562 }, 00:15:09.562 { 00:15:09.562 "name": "BaseBdev4", 00:15:09.562 "uuid": "a48354e4-3442-458a-a98e-f4cd39da1168", 00:15:09.562 "is_configured": true, 00:15:09.562 "data_offset": 2048, 00:15:09.562 "data_size": 63488 00:15:09.562 } 00:15:09.562 ] 00:15:09.562 } 00:15:09.562 } 00:15:09.562 }' 00:15:09.562 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.562 18:55:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:09.562 BaseBdev2 00:15:09.562 BaseBdev3 00:15:09.562 BaseBdev4' 00:15:09.562 18:55:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.562 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.822 18:55:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.822 [2024-11-28 18:55:39.205365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.822 [2024-11-28 18:55:39.205390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.822 [2024-11-28 18:55:39.205484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.822 [2024-11-28 18:55:39.205728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.822 [2024-11-28 18:55:39.205742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 95382 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95382 ']' 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 95382 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95382 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95382' 00:15:09.822 killing process with pid 95382 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 95382 00:15:09.822 [2024-11-28 18:55:39.256277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.822 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 95382 00:15:09.822 [2024-11-28 18:55:39.296221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.083 18:55:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:10.083 00:15:10.083 real 0m9.510s 00:15:10.083 user 0m16.194s 00:15:10.083 sys 0m2.172s 00:15:10.083 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.083 18:55:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 ************************************ 00:15:10.083 END TEST raid5f_state_function_test_sb 00:15:10.083 ************************************ 00:15:10.083 18:55:39 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:10.083 18:55:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:10.083 18:55:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.083 18:55:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.083 ************************************ 00:15:10.083 START TEST raid5f_superblock_test 00:15:10.083 
************************************ 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=96036 00:15:10.083 
18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 96036 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 96036 ']' 00:15:10.083 18:55:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.084 18:55:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.084 18:55:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.084 18:55:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.084 18:55:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.344 [2024-11-28 18:55:39.698103] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:10.344 [2024-11-28 18:55:39.698334] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96036 ] 00:15:10.344 [2024-11-28 18:55:39.834377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:10.344 [2024-11-28 18:55:39.871506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.344 [2024-11-28 18:55:39.898161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.344 [2024-11-28 18:55:39.941184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.344 [2024-11-28 18:55:39.941293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.913 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 malloc1 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 [2024-11-28 18:55:40.538261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.174 [2024-11-28 18:55:40.538374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.174 [2024-11-28 18:55:40.538420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:11.174 [2024-11-28 18:55:40.538464] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.174 [2024-11-28 18:55:40.540555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.174 [2024-11-28 18:55:40.540624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.174 pt1 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:11.174 18:55:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 malloc2 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 [2024-11-28 18:55:40.570840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.174 [2024-11-28 18:55:40.570886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.174 [2024-11-28 18:55:40.570904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:11.174 [2024-11-28 18:55:40.570912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.174 [2024-11-28 18:55:40.572972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.174 [2024-11-28 18:55:40.573054] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.174 pt2 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 malloc3 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 [2024-11-28 18:55:40.599410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:11.174 [2024-11-28 18:55:40.599518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.174 [2024-11-28 18:55:40.599556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:15:11.174 [2024-11-28 18:55:40.599587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.174 [2024-11-28 18:55:40.601611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.174 [2024-11-28 18:55:40.601694] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:11.174 pt3 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 malloc4 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd 
bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 [2024-11-28 18:55:40.650540] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:11.174 [2024-11-28 18:55:40.650741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.174 [2024-11-28 18:55:40.650861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:11.174 [2024-11-28 18:55:40.650915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.174 [2024-11-28 18:55:40.654344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.174 [2024-11-28 18:55:40.654478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:11.174 pt4 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.174 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.174 [2024-11-28 18:55:40.662782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.174 [2024-11-28 18:55:40.664985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.174 [2024-11-28 18:55:40.665096] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:11.174 [2024-11-28 18:55:40.665149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:11.174 [2024-11-28 18:55:40.665345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:11.174 [2024-11-28 18:55:40.665359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:11.174 [2024-11-28 18:55:40.665667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:11.174 [2024-11-28 18:55:40.666237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:11.175 [2024-11-28 18:55:40.666259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:11.175 [2024-11-28 18:55:40.666396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.175 "name": "raid_bdev1", 00:15:11.175 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:11.175 "strip_size_kb": 64, 00:15:11.175 "state": "online", 00:15:11.175 "raid_level": "raid5f", 00:15:11.175 "superblock": true, 00:15:11.175 "num_base_bdevs": 4, 00:15:11.175 "num_base_bdevs_discovered": 4, 00:15:11.175 "num_base_bdevs_operational": 4, 00:15:11.175 "base_bdevs_list": [ 00:15:11.175 { 00:15:11.175 "name": "pt1", 00:15:11.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.175 "is_configured": true, 00:15:11.175 "data_offset": 2048, 00:15:11.175 "data_size": 63488 00:15:11.175 }, 00:15:11.175 { 00:15:11.175 "name": "pt2", 00:15:11.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.175 "is_configured": true, 00:15:11.175 "data_offset": 2048, 00:15:11.175 "data_size": 63488 00:15:11.175 }, 00:15:11.175 { 00:15:11.175 "name": "pt3", 00:15:11.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.175 "is_configured": true, 00:15:11.175 "data_offset": 2048, 00:15:11.175 "data_size": 63488 00:15:11.175 }, 00:15:11.175 { 00:15:11.175 "name": "pt4", 00:15:11.175 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:11.175 "is_configured": true, 00:15:11.175 "data_offset": 2048, 00:15:11.175 "data_size": 63488 00:15:11.175 } 00:15:11.175 ] 00:15:11.175 }' 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.175 18:55:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.745 [2024-11-28 18:55:41.128658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.745 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.745 "name": "raid_bdev1", 00:15:11.745 "aliases": [ 00:15:11.745 "5868fab7-6bdf-48e7-9442-80f80efaf3a0" 00:15:11.745 ], 00:15:11.745 "product_name": "Raid Volume", 00:15:11.745 
"block_size": 512, 00:15:11.745 "num_blocks": 190464, 00:15:11.745 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:11.745 "assigned_rate_limits": { 00:15:11.745 "rw_ios_per_sec": 0, 00:15:11.745 "rw_mbytes_per_sec": 0, 00:15:11.745 "r_mbytes_per_sec": 0, 00:15:11.745 "w_mbytes_per_sec": 0 00:15:11.745 }, 00:15:11.745 "claimed": false, 00:15:11.745 "zoned": false, 00:15:11.745 "supported_io_types": { 00:15:11.745 "read": true, 00:15:11.745 "write": true, 00:15:11.745 "unmap": false, 00:15:11.745 "flush": false, 00:15:11.745 "reset": true, 00:15:11.745 "nvme_admin": false, 00:15:11.745 "nvme_io": false, 00:15:11.745 "nvme_io_md": false, 00:15:11.745 "write_zeroes": true, 00:15:11.745 "zcopy": false, 00:15:11.745 "get_zone_info": false, 00:15:11.745 "zone_management": false, 00:15:11.745 "zone_append": false, 00:15:11.745 "compare": false, 00:15:11.745 "compare_and_write": false, 00:15:11.745 "abort": false, 00:15:11.745 "seek_hole": false, 00:15:11.745 "seek_data": false, 00:15:11.745 "copy": false, 00:15:11.745 "nvme_iov_md": false 00:15:11.745 }, 00:15:11.745 "driver_specific": { 00:15:11.745 "raid": { 00:15:11.745 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:11.745 "strip_size_kb": 64, 00:15:11.745 "state": "online", 00:15:11.745 "raid_level": "raid5f", 00:15:11.745 "superblock": true, 00:15:11.745 "num_base_bdevs": 4, 00:15:11.745 "num_base_bdevs_discovered": 4, 00:15:11.745 "num_base_bdevs_operational": 4, 00:15:11.745 "base_bdevs_list": [ 00:15:11.745 { 00:15:11.745 "name": "pt1", 00:15:11.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.745 "is_configured": true, 00:15:11.745 "data_offset": 2048, 00:15:11.745 "data_size": 63488 00:15:11.745 }, 00:15:11.745 { 00:15:11.745 "name": "pt2", 00:15:11.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.745 "is_configured": true, 00:15:11.745 "data_offset": 2048, 00:15:11.745 "data_size": 63488 00:15:11.745 }, 00:15:11.745 { 00:15:11.745 "name": "pt3", 00:15:11.745 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:15:11.745 "is_configured": true, 00:15:11.745 "data_offset": 2048, 00:15:11.745 "data_size": 63488 00:15:11.745 }, 00:15:11.745 { 00:15:11.745 "name": "pt4", 00:15:11.745 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:11.745 "is_configured": true, 00:15:11.746 "data_offset": 2048, 00:15:11.746 "data_size": 63488 00:15:11.746 } 00:15:11.746 ] 00:15:11.746 } 00:15:11.746 } 00:15:11.746 }' 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:11.746 pt2 00:15:11.746 pt3 00:15:11.746 pt4' 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.746 18:55:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.746 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:12.007 [2024-11-28 18:55:41.464728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5868fab7-6bdf-48e7-9442-80f80efaf3a0 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5868fab7-6bdf-48e7-9442-80f80efaf3a0 ']' 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:12.007 [2024-11-28 18:55:41.512559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.007 [2024-11-28 18:55:41.512580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.007 [2024-11-28 18:55:41.512652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.007 [2024-11-28 18:55:41.512729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.007 [2024-11-28 18:55:41.512741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 18:55:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:12.267 18:55:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.267 [2024-11-28 18:55:41.680661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:12.267 [2024-11-28 18:55:41.682629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:15:12.267 [2024-11-28 18:55:41.682669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:12.267 [2024-11-28 18:55:41.682696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:12.267 [2024-11-28 18:55:41.682735] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:12.267 [2024-11-28 18:55:41.682789] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:12.267 [2024-11-28 18:55:41.682805] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:12.267 [2024-11-28 18:55:41.682821] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:12.267 [2024-11-28 18:55:41.682833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.267 [2024-11-28 18:55:41.682842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:15:12.267 request: 00:15:12.267 { 00:15:12.267 "name": "raid_bdev1", 00:15:12.267 "raid_level": "raid5f", 00:15:12.267 "base_bdevs": [ 00:15:12.267 "malloc1", 00:15:12.267 "malloc2", 00:15:12.267 "malloc3", 00:15:12.267 "malloc4" 00:15:12.267 ], 00:15:12.267 "strip_size_kb": 64, 00:15:12.267 "superblock": false, 00:15:12.267 "method": "bdev_raid_create", 00:15:12.267 "req_id": 1 00:15:12.267 } 00:15:12.267 Got JSON-RPC error response 00:15:12.267 response: 00:15:12.267 { 00:15:12.267 "code": -17, 00:15:12.267 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:12.267 } 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # 
es=1 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.267 [2024-11-28 18:55:41.748640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:12.267 [2024-11-28 18:55:41.748729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.267 [2024-11-28 18:55:41.748760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:12.267 [2024-11-28 18:55:41.748788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.267 [2024-11-28 18:55:41.750894] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:15:12.267 [2024-11-28 18:55:41.750966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:12.267 [2024-11-28 18:55:41.751047] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:12.267 [2024-11-28 18:55:41.751106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:12.267 pt1 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.267 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.267 "name": "raid_bdev1", 00:15:12.267 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:12.267 "strip_size_kb": 64, 00:15:12.267 "state": "configuring", 00:15:12.267 "raid_level": "raid5f", 00:15:12.267 "superblock": true, 00:15:12.267 "num_base_bdevs": 4, 00:15:12.267 "num_base_bdevs_discovered": 1, 00:15:12.267 "num_base_bdevs_operational": 4, 00:15:12.267 "base_bdevs_list": [ 00:15:12.267 { 00:15:12.267 "name": "pt1", 00:15:12.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.267 "is_configured": true, 00:15:12.267 "data_offset": 2048, 00:15:12.267 "data_size": 63488 00:15:12.267 }, 00:15:12.267 { 00:15:12.267 "name": null, 00:15:12.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.267 "is_configured": false, 00:15:12.267 "data_offset": 2048, 00:15:12.267 "data_size": 63488 00:15:12.267 }, 00:15:12.267 { 00:15:12.267 "name": null, 00:15:12.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.267 "is_configured": false, 00:15:12.267 "data_offset": 2048, 00:15:12.268 "data_size": 63488 00:15:12.268 }, 00:15:12.268 { 00:15:12.268 "name": null, 00:15:12.268 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:12.268 "is_configured": false, 00:15:12.268 "data_offset": 2048, 00:15:12.268 "data_size": 63488 00:15:12.268 } 00:15:12.268 ] 00:15:12.268 }' 00:15:12.268 18:55:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.268 18:55:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.836 [2024-11-28 18:55:42.236765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:12.836 [2024-11-28 18:55:42.236813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.836 [2024-11-28 18:55:42.236829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:12.836 [2024-11-28 18:55:42.236838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.836 [2024-11-28 18:55:42.237163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.836 [2024-11-28 18:55:42.237182] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:12.836 [2024-11-28 18:55:42.237234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:12.836 [2024-11-28 18:55:42.237252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.836 pt2 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.836 [2024-11-28 18:55:42.244773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.836 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.837 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.837 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.837 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.837 "name": "raid_bdev1", 00:15:12.837 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:12.837 "strip_size_kb": 64, 00:15:12.837 "state": "configuring", 00:15:12.837 "raid_level": "raid5f", 00:15:12.837 "superblock": true, 00:15:12.837 
"num_base_bdevs": 4, 00:15:12.837 "num_base_bdevs_discovered": 1, 00:15:12.837 "num_base_bdevs_operational": 4, 00:15:12.837 "base_bdevs_list": [ 00:15:12.837 { 00:15:12.837 "name": "pt1", 00:15:12.837 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.837 "is_configured": true, 00:15:12.837 "data_offset": 2048, 00:15:12.837 "data_size": 63488 00:15:12.837 }, 00:15:12.837 { 00:15:12.837 "name": null, 00:15:12.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.837 "is_configured": false, 00:15:12.837 "data_offset": 0, 00:15:12.837 "data_size": 63488 00:15:12.837 }, 00:15:12.837 { 00:15:12.837 "name": null, 00:15:12.837 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.837 "is_configured": false, 00:15:12.837 "data_offset": 2048, 00:15:12.837 "data_size": 63488 00:15:12.837 }, 00:15:12.837 { 00:15:12.837 "name": null, 00:15:12.837 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:12.837 "is_configured": false, 00:15:12.837 "data_offset": 2048, 00:15:12.837 "data_size": 63488 00:15:12.837 } 00:15:12.837 ] 00:15:12.837 }' 00:15:12.837 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.837 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.096 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:13.096 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:13.096 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.356 [2024-11-28 18:55:42.708915] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:13.356 [2024-11-28 
18:55:42.709010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.356 [2024-11-28 18:55:42.709044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:13.356 [2024-11-28 18:55:42.709070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.356 [2024-11-28 18:55:42.709449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.356 [2024-11-28 18:55:42.709504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.356 [2024-11-28 18:55:42.709592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:13.356 [2024-11-28 18:55:42.709640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.356 pt2 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.356 [2024-11-28 18:55:42.720907] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:13.356 [2024-11-28 18:55:42.720991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.356 [2024-11-28 18:55:42.721023] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:13.356 [2024-11-28 18:55:42.721049] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:13.356 [2024-11-28 18:55:42.721382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.356 [2024-11-28 18:55:42.721450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:13.356 [2024-11-28 18:55:42.721531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:13.356 [2024-11-28 18:55:42.721588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:13.356 pt3 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.356 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.357 [2024-11-28 18:55:42.732913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:13.357 [2024-11-28 18:55:42.732954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.357 [2024-11-28 18:55:42.732969] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:13.357 [2024-11-28 18:55:42.732977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.357 [2024-11-28 18:55:42.733261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.357 [2024-11-28 18:55:42.733277] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:13.357 [2024-11-28 18:55:42.733327] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:13.357 [2024-11-28 18:55:42.733342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:13.357 [2024-11-28 18:55:42.733453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:13.357 [2024-11-28 18:55:42.733461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:13.357 [2024-11-28 18:55:42.733703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:13.357 [2024-11-28 18:55:42.734169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:13.357 [2024-11-28 18:55:42.734191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:13.357 [2024-11-28 18:55:42.734285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.357 pt4 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.357 "name": "raid_bdev1", 00:15:13.357 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:13.357 "strip_size_kb": 64, 00:15:13.357 "state": "online", 00:15:13.357 "raid_level": "raid5f", 00:15:13.357 "superblock": true, 00:15:13.357 "num_base_bdevs": 4, 00:15:13.357 "num_base_bdevs_discovered": 4, 00:15:13.357 "num_base_bdevs_operational": 4, 00:15:13.357 "base_bdevs_list": [ 00:15:13.357 { 00:15:13.357 "name": "pt1", 00:15:13.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.357 "is_configured": true, 00:15:13.357 "data_offset": 2048, 00:15:13.357 "data_size": 63488 00:15:13.357 }, 00:15:13.357 { 00:15:13.357 "name": "pt2", 00:15:13.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.357 "is_configured": true, 00:15:13.357 "data_offset": 2048, 00:15:13.357 "data_size": 63488 00:15:13.357 }, 00:15:13.357 { 00:15:13.357 "name": "pt3", 
00:15:13.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.357 "is_configured": true, 00:15:13.357 "data_offset": 2048, 00:15:13.357 "data_size": 63488 00:15:13.357 }, 00:15:13.357 { 00:15:13.357 "name": "pt4", 00:15:13.357 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:13.357 "is_configured": true, 00:15:13.357 "data_offset": 2048, 00:15:13.357 "data_size": 63488 00:15:13.357 } 00:15:13.357 ] 00:15:13.357 }' 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.357 18:55:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.617 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.617 [2024-11-28 18:55:43.217188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.876 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.876 18:55:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:13.876 "name": "raid_bdev1", 00:15:13.876 "aliases": [ 00:15:13.876 "5868fab7-6bdf-48e7-9442-80f80efaf3a0" 00:15:13.876 ], 00:15:13.876 "product_name": "Raid Volume", 00:15:13.876 "block_size": 512, 00:15:13.876 "num_blocks": 190464, 00:15:13.876 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:13.876 "assigned_rate_limits": { 00:15:13.876 "rw_ios_per_sec": 0, 00:15:13.876 "rw_mbytes_per_sec": 0, 00:15:13.876 "r_mbytes_per_sec": 0, 00:15:13.876 "w_mbytes_per_sec": 0 00:15:13.876 }, 00:15:13.876 "claimed": false, 00:15:13.876 "zoned": false, 00:15:13.876 "supported_io_types": { 00:15:13.876 "read": true, 00:15:13.876 "write": true, 00:15:13.876 "unmap": false, 00:15:13.876 "flush": false, 00:15:13.876 "reset": true, 00:15:13.876 "nvme_admin": false, 00:15:13.876 "nvme_io": false, 00:15:13.876 "nvme_io_md": false, 00:15:13.876 "write_zeroes": true, 00:15:13.876 "zcopy": false, 00:15:13.876 "get_zone_info": false, 00:15:13.876 "zone_management": false, 00:15:13.877 "zone_append": false, 00:15:13.877 "compare": false, 00:15:13.877 "compare_and_write": false, 00:15:13.877 "abort": false, 00:15:13.877 "seek_hole": false, 00:15:13.877 "seek_data": false, 00:15:13.877 "copy": false, 00:15:13.877 "nvme_iov_md": false 00:15:13.877 }, 00:15:13.877 "driver_specific": { 00:15:13.877 "raid": { 00:15:13.877 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:13.877 "strip_size_kb": 64, 00:15:13.877 "state": "online", 00:15:13.877 "raid_level": "raid5f", 00:15:13.877 "superblock": true, 00:15:13.877 "num_base_bdevs": 4, 00:15:13.877 "num_base_bdevs_discovered": 4, 00:15:13.877 "num_base_bdevs_operational": 4, 00:15:13.877 "base_bdevs_list": [ 00:15:13.877 { 00:15:13.877 "name": "pt1", 00:15:13.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.877 "is_configured": true, 00:15:13.877 "data_offset": 2048, 00:15:13.877 "data_size": 63488 00:15:13.877 }, 00:15:13.877 { 00:15:13.877 
"name": "pt2", 00:15:13.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.877 "is_configured": true, 00:15:13.877 "data_offset": 2048, 00:15:13.877 "data_size": 63488 00:15:13.877 }, 00:15:13.877 { 00:15:13.877 "name": "pt3", 00:15:13.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.877 "is_configured": true, 00:15:13.877 "data_offset": 2048, 00:15:13.877 "data_size": 63488 00:15:13.877 }, 00:15:13.877 { 00:15:13.877 "name": "pt4", 00:15:13.877 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:13.877 "is_configured": true, 00:15:13.877 "data_offset": 2048, 00:15:13.877 "data_size": 63488 00:15:13.877 } 00:15:13.877 ] 00:15:13.877 } 00:15:13.877 } 00:15:13.877 }' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:13.877 pt2 00:15:13.877 pt3 00:15:13.877 pt4' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.877 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.137 18:55:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.137 [2024-11-28 18:55:43.561285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5868fab7-6bdf-48e7-9442-80f80efaf3a0 '!=' 5868fab7-6bdf-48e7-9442-80f80efaf3a0 ']' 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:14.137 18:55:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.137 [2024-11-28 18:55:43.613181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.137 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.137 "name": "raid_bdev1", 00:15:14.137 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:14.137 "strip_size_kb": 64, 00:15:14.137 "state": "online", 00:15:14.137 "raid_level": "raid5f", 00:15:14.137 "superblock": true, 00:15:14.137 "num_base_bdevs": 4, 00:15:14.137 "num_base_bdevs_discovered": 3, 00:15:14.137 "num_base_bdevs_operational": 3, 00:15:14.137 "base_bdevs_list": [ 00:15:14.137 { 00:15:14.137 "name": null, 00:15:14.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.137 "is_configured": false, 00:15:14.137 "data_offset": 0, 00:15:14.137 "data_size": 63488 00:15:14.137 }, 00:15:14.137 { 00:15:14.137 "name": "pt2", 00:15:14.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.137 "is_configured": true, 00:15:14.137 "data_offset": 2048, 00:15:14.137 "data_size": 63488 00:15:14.137 }, 00:15:14.137 { 00:15:14.138 "name": "pt3", 00:15:14.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.138 "is_configured": true, 00:15:14.138 "data_offset": 2048, 00:15:14.138 "data_size": 63488 00:15:14.138 }, 00:15:14.138 { 00:15:14.138 "name": "pt4", 00:15:14.138 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:14.138 "is_configured": true, 00:15:14.138 "data_offset": 2048, 00:15:14.138 "data_size": 63488 00:15:14.138 } 00:15:14.138 ] 00:15:14.138 }' 00:15:14.138 18:55:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.138 18:55:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.708 [2024-11-28 18:55:44.017258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.708 [2024-11-28 18:55:44.017284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.708 [2024-11-28 18:55:44.017343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.708 [2024-11-28 18:55:44.017407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.708 [2024-11-28 18:55:44.017415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:14.708 18:55:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.708 [2024-11-28 18:55:44.117282] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:14.708 [2024-11-28 18:55:44.117326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.708 [2024-11-28 18:55:44.117343] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:14.708 [2024-11-28 18:55:44.117351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.708 [2024-11-28 18:55:44.119547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.708 [2024-11-28 18:55:44.119635] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:14.708 [2024-11-28 18:55:44.119705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:14.708 [2024-11-28 18:55:44.119738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.708 pt2 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.708 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.708 "name": "raid_bdev1", 00:15:14.708 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:14.708 "strip_size_kb": 64, 00:15:14.708 "state": "configuring", 00:15:14.708 "raid_level": "raid5f", 00:15:14.708 "superblock": true, 00:15:14.708 "num_base_bdevs": 4, 00:15:14.708 "num_base_bdevs_discovered": 1, 00:15:14.708 "num_base_bdevs_operational": 3, 00:15:14.708 "base_bdevs_list": [ 00:15:14.708 { 00:15:14.708 "name": null, 00:15:14.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.708 "is_configured": false, 
00:15:14.708 "data_offset": 2048, 00:15:14.708 "data_size": 63488 00:15:14.708 }, 00:15:14.708 { 00:15:14.708 "name": "pt2", 00:15:14.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.708 "is_configured": true, 00:15:14.708 "data_offset": 2048, 00:15:14.708 "data_size": 63488 00:15:14.708 }, 00:15:14.708 { 00:15:14.709 "name": null, 00:15:14.709 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.709 "is_configured": false, 00:15:14.709 "data_offset": 2048, 00:15:14.709 "data_size": 63488 00:15:14.709 }, 00:15:14.709 { 00:15:14.709 "name": null, 00:15:14.709 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:14.709 "is_configured": false, 00:15:14.709 "data_offset": 2048, 00:15:14.709 "data_size": 63488 00:15:14.709 } 00:15:14.709 ] 00:15:14.709 }' 00:15:14.709 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.709 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.279 [2024-11-28 18:55:44.581452] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:15.279 [2024-11-28 18:55:44.581552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.279 [2024-11-28 18:55:44.581598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:15.279 [2024-11-28 18:55:44.581626] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.279 [2024-11-28 18:55:44.581975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.279 [2024-11-28 18:55:44.582030] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:15.279 [2024-11-28 18:55:44.582134] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:15.279 [2024-11-28 18:55:44.582182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:15.279 pt3 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.279 "name": "raid_bdev1", 00:15:15.279 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:15.279 "strip_size_kb": 64, 00:15:15.279 "state": "configuring", 00:15:15.279 "raid_level": "raid5f", 00:15:15.279 "superblock": true, 00:15:15.279 "num_base_bdevs": 4, 00:15:15.279 "num_base_bdevs_discovered": 2, 00:15:15.279 "num_base_bdevs_operational": 3, 00:15:15.279 "base_bdevs_list": [ 00:15:15.279 { 00:15:15.279 "name": null, 00:15:15.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.279 "is_configured": false, 00:15:15.279 "data_offset": 2048, 00:15:15.279 "data_size": 63488 00:15:15.279 }, 00:15:15.279 { 00:15:15.279 "name": "pt2", 00:15:15.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.279 "is_configured": true, 00:15:15.279 "data_offset": 2048, 00:15:15.279 "data_size": 63488 00:15:15.279 }, 00:15:15.279 { 00:15:15.279 "name": "pt3", 00:15:15.279 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.279 "is_configured": true, 00:15:15.279 "data_offset": 2048, 00:15:15.279 "data_size": 63488 00:15:15.279 }, 00:15:15.279 { 00:15:15.279 "name": null, 00:15:15.279 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:15.279 "is_configured": false, 00:15:15.279 "data_offset": 2048, 00:15:15.279 "data_size": 63488 00:15:15.279 } 00:15:15.279 ] 00:15:15.279 }' 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.279 18:55:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.539 18:55:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:15.539 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:15.539 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:15.539 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:15.539 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.539 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.539 [2024-11-28 18:55:45.037577] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:15.540 [2024-11-28 18:55:45.037667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.540 [2024-11-28 18:55:45.037703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:15.540 [2024-11-28 18:55:45.037729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.540 [2024-11-28 18:55:45.038110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.540 [2024-11-28 18:55:45.038167] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:15.540 [2024-11-28 18:55:45.038256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:15.540 [2024-11-28 18:55:45.038303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:15.540 [2024-11-28 18:55:45.038434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:15.540 [2024-11-28 18:55:45.038473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:15.540 [2024-11-28 18:55:45.038725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006490 00:15:15.540 [2024-11-28 18:55:45.039255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:15.540 [2024-11-28 18:55:45.039311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:15.540 [2024-11-28 18:55:45.039585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.540 pt4 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.540 "name": "raid_bdev1", 00:15:15.540 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:15.540 "strip_size_kb": 64, 00:15:15.540 "state": "online", 00:15:15.540 "raid_level": "raid5f", 00:15:15.540 "superblock": true, 00:15:15.540 "num_base_bdevs": 4, 00:15:15.540 "num_base_bdevs_discovered": 3, 00:15:15.540 "num_base_bdevs_operational": 3, 00:15:15.540 "base_bdevs_list": [ 00:15:15.540 { 00:15:15.540 "name": null, 00:15:15.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.540 "is_configured": false, 00:15:15.540 "data_offset": 2048, 00:15:15.540 "data_size": 63488 00:15:15.540 }, 00:15:15.540 { 00:15:15.540 "name": "pt2", 00:15:15.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.540 "is_configured": true, 00:15:15.540 "data_offset": 2048, 00:15:15.540 "data_size": 63488 00:15:15.540 }, 00:15:15.540 { 00:15:15.540 "name": "pt3", 00:15:15.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.540 "is_configured": true, 00:15:15.540 "data_offset": 2048, 00:15:15.540 "data_size": 63488 00:15:15.540 }, 00:15:15.540 { 00:15:15.540 "name": "pt4", 00:15:15.540 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:15.540 "is_configured": true, 00:15:15.540 "data_offset": 2048, 00:15:15.540 "data_size": 63488 00:15:15.540 } 00:15:15.540 ] 00:15:15.540 }' 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.540 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.121 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.121 18:55:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.121 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.121 [2024-11-28 18:55:45.473691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.121 [2024-11-28 18:55:45.473757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.121 [2024-11-28 18:55:45.473837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.121 [2024-11-28 18:55:45.473937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.121 [2024-11-28 18:55:45.473998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:16.121 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.121 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 
00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.122 [2024-11-28 18:55:45.549725] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:16.122 [2024-11-28 18:55:45.549818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.122 [2024-11-28 18:55:45.549852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:16.122 [2024-11-28 18:55:45.549881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.122 [2024-11-28 18:55:45.552127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.122 [2024-11-28 18:55:45.552205] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:16.122 [2024-11-28 18:55:45.552296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:16.122 [2024-11-28 18:55:45.552359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:16.122 [2024-11-28 18:55:45.552515] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:16.122 [2024-11-28 18:55:45.552583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.122 [2024-11-28 18:55:45.552627] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:15:16.122 [2024-11-28 18:55:45.552718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:16.122 [2024-11-28 18:55:45.552852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:16.122 pt1 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.122 18:55:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.122 "name": "raid_bdev1", 00:15:16.122 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:16.122 "strip_size_kb": 64, 00:15:16.122 "state": "configuring", 00:15:16.122 "raid_level": "raid5f", 00:15:16.122 "superblock": true, 00:15:16.122 "num_base_bdevs": 4, 00:15:16.122 "num_base_bdevs_discovered": 2, 00:15:16.122 "num_base_bdevs_operational": 3, 00:15:16.122 "base_bdevs_list": [ 00:15:16.122 { 00:15:16.122 "name": null, 00:15:16.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.122 "is_configured": false, 00:15:16.122 "data_offset": 2048, 00:15:16.122 "data_size": 63488 00:15:16.122 }, 00:15:16.122 { 00:15:16.122 "name": "pt2", 00:15:16.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.122 "is_configured": true, 00:15:16.122 "data_offset": 2048, 00:15:16.122 "data_size": 63488 00:15:16.122 }, 00:15:16.122 { 00:15:16.122 "name": "pt3", 00:15:16.122 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.122 "is_configured": true, 00:15:16.122 "data_offset": 2048, 00:15:16.122 "data_size": 63488 00:15:16.122 }, 00:15:16.122 { 00:15:16.122 "name": null, 00:15:16.122 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:16.122 "is_configured": false, 00:15:16.122 "data_offset": 2048, 00:15:16.122 "data_size": 63488 00:15:16.122 } 00:15:16.122 ] 00:15:16.122 }' 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.122 18:55:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.731 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r 
'.[].base_bdevs_list[0].is_configured' 00:15:16.731 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:16.731 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.731 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.731 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.731 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:16.731 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:16.731 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.731 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.731 [2024-11-28 18:55:46.073856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:16.731 [2024-11-28 18:55:46.073910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.731 [2024-11-28 18:55:46.073930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:16.731 [2024-11-28 18:55:46.073938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.731 [2024-11-28 18:55:46.074300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.731 [2024-11-28 18:55:46.074318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:16.732 [2024-11-28 18:55:46.074381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:16.732 [2024-11-28 18:55:46.074400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:16.732 [2024-11-28 18:55:46.074505] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:16.732 [2024-11-28 18:55:46.074514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:16.732 [2024-11-28 18:55:46.074761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:16.732 [2024-11-28 18:55:46.075312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:16.732 [2024-11-28 18:55:46.075339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:16.732 [2024-11-28 18:55:46.075531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.732 pt4 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.732 
18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.732 "name": "raid_bdev1", 00:15:16.732 "uuid": "5868fab7-6bdf-48e7-9442-80f80efaf3a0", 00:15:16.732 "strip_size_kb": 64, 00:15:16.732 "state": "online", 00:15:16.732 "raid_level": "raid5f", 00:15:16.732 "superblock": true, 00:15:16.732 "num_base_bdevs": 4, 00:15:16.732 "num_base_bdevs_discovered": 3, 00:15:16.732 "num_base_bdevs_operational": 3, 00:15:16.732 "base_bdevs_list": [ 00:15:16.732 { 00:15:16.732 "name": null, 00:15:16.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.732 "is_configured": false, 00:15:16.732 "data_offset": 2048, 00:15:16.732 "data_size": 63488 00:15:16.732 }, 00:15:16.732 { 00:15:16.732 "name": "pt2", 00:15:16.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.732 "is_configured": true, 00:15:16.732 "data_offset": 2048, 00:15:16.732 "data_size": 63488 00:15:16.732 }, 00:15:16.732 { 00:15:16.732 "name": "pt3", 00:15:16.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.732 "is_configured": true, 00:15:16.732 "data_offset": 2048, 00:15:16.732 "data_size": 63488 00:15:16.732 }, 00:15:16.732 { 00:15:16.732 "name": "pt4", 00:15:16.732 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:16.732 "is_configured": true, 00:15:16.732 "data_offset": 2048, 00:15:16.732 "data_size": 63488 00:15:16.732 } 00:15:16.732 ] 00:15:16.732 }' 00:15:16.732 18:55:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.732 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.991 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:16.991 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.991 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.991 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:16.991 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:17.252 [2024-11-28 18:55:46.622184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5868fab7-6bdf-48e7-9442-80f80efaf3a0 '!=' 5868fab7-6bdf-48e7-9442-80f80efaf3a0 ']' 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 96036 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 96036 ']' 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 
96036 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96036 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.252 killing process with pid 96036 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96036' 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 96036 00:15:17.252 [2024-11-28 18:55:46.706644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.252 [2024-11-28 18:55:46.706712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.252 [2024-11-28 18:55:46.706778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.252 [2024-11-28 18:55:46.706790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:17.252 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 96036 00:15:17.252 [2024-11-28 18:55:46.749908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.513 18:55:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:17.513 ************************************ 00:15:17.513 END TEST raid5f_superblock_test 00:15:17.513 ************************************ 00:15:17.513 00:15:17.513 real 0m7.376s 00:15:17.513 user 0m12.372s 00:15:17.513 sys 0m1.663s 00:15:17.513 18:55:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.513 18:55:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.513 18:55:47 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:17.513 18:55:47 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:17.513 18:55:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:17.513 18:55:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.513 18:55:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.513 ************************************ 00:15:17.513 START TEST raid5f_rebuild_test 00:15:17.513 ************************************ 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.513 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:17.514 18:55:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=96510 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 96510 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 96510 ']' 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.514 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 [2024-11-28 18:55:47.181163] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:17.774 [2024-11-28 18:55:47.181429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96510 ] 00:15:17.774 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:17.774 Zero copy mechanism will not be used. 00:15:17.774 [2024-11-28 18:55:47.323699] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:15:17.774 [2024-11-28 18:55:47.362110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.034 [2024-11-28 18:55:47.388743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.034 [2024-11-28 18:55:47.431760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.034 [2024-11-28 18:55:47.431795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.605 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.605 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:18.605 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.605 18:55:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:18.605 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 BaseBdev1_malloc 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 [2024-11-28 18:55:48.016627] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:18.605 [2024-11-28 18:55:48.016700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.605 [2024-11-28 18:55:48.016724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:15:18.605 [2024-11-28 18:55:48.016737] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.605 [2024-11-28 18:55:48.018838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.605 [2024-11-28 18:55:48.018921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.605 BaseBdev1 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 BaseBdev2_malloc 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 [2024-11-28 18:55:48.045386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:18.605 [2024-11-28 18:55:48.045452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.605 [2024-11-28 18:55:48.045470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:18.605 [2024-11-28 18:55:48.045480] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.605 [2024-11-28 18:55:48.047494] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.605 [2024-11-28 18:55:48.047530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:18.605 BaseBdev2 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 BaseBdev3_malloc 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 [2024-11-28 18:55:48.074002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:18.605 [2024-11-28 18:55:48.074051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.605 [2024-11-28 18:55:48.074071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:18.605 [2024-11-28 18:55:48.074081] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.605 [2024-11-28 18:55:48.076082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.605 [2024-11-28 18:55:48.076120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:18.605 
BaseBdev3 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 BaseBdev4_malloc 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 [2024-11-28 18:55:48.117374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:18.605 [2024-11-28 18:55:48.117470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.605 [2024-11-28 18:55:48.117503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:18.605 [2024-11-28 18:55:48.117521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.605 [2024-11-28 18:55:48.120760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.605 [2024-11-28 18:55:48.120817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:18.605 BaseBdev4 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 512 -b spare_malloc 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 spare_malloc 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 spare_delay 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 [2024-11-28 18:55:48.158448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.605 [2024-11-28 18:55:48.158490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.605 [2024-11-28 18:55:48.158506] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:18.605 [2024-11-28 18:55:48.158515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.605 [2024-11-28 18:55:48.160467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.605 [2024-11-28 18:55:48.160503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.605 spare 00:15:18.605 18:55:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.605 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.605 [2024-11-28 18:55:48.170524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.605 [2024-11-28 18:55:48.172319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.605 [2024-11-28 18:55:48.172377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.606 [2024-11-28 18:55:48.172416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:18.606 [2024-11-28 18:55:48.172514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:18.606 [2024-11-28 18:55:48.172529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:18.606 [2024-11-28 18:55:48.172791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:18.606 [2024-11-28 18:55:48.173225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:18.606 [2024-11-28 18:55:48.173236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:18.606 [2024-11-28 18:55:48.173347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:18.606 
18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.606 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.866 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.866 "name": "raid_bdev1", 00:15:18.866 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:18.866 "strip_size_kb": 64, 00:15:18.866 "state": "online", 00:15:18.866 "raid_level": "raid5f", 00:15:18.866 "superblock": false, 00:15:18.866 "num_base_bdevs": 4, 00:15:18.866 "num_base_bdevs_discovered": 4, 00:15:18.866 "num_base_bdevs_operational": 4, 00:15:18.866 "base_bdevs_list": [ 00:15:18.866 { 
00:15:18.866 "name": "BaseBdev1", 00:15:18.866 "uuid": "049f2c30-82a4-5140-a99c-a9d53735076e", 00:15:18.866 "is_configured": true, 00:15:18.866 "data_offset": 0, 00:15:18.866 "data_size": 65536 00:15:18.866 }, 00:15:18.866 { 00:15:18.866 "name": "BaseBdev2", 00:15:18.866 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:18.866 "is_configured": true, 00:15:18.866 "data_offset": 0, 00:15:18.866 "data_size": 65536 00:15:18.866 }, 00:15:18.866 { 00:15:18.866 "name": "BaseBdev3", 00:15:18.866 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:18.866 "is_configured": true, 00:15:18.866 "data_offset": 0, 00:15:18.866 "data_size": 65536 00:15:18.866 }, 00:15:18.866 { 00:15:18.866 "name": "BaseBdev4", 00:15:18.866 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:18.866 "is_configured": true, 00:15:18.866 "data_offset": 0, 00:15:18.866 "data_size": 65536 00:15:18.866 } 00:15:18.866 ] 00:15:18.866 }' 00:15:18.866 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.866 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.126 [2024-11-28 18:55:48.655363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.126 18:55:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.126 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:19.386 [2024-11-28 18:55:48.923350] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:19.386 /dev/nbd0 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.386 1+0 records in 00:15:19.386 1+0 records out 00:15:19.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418213 s, 9.8 MB/s 00:15:19.386 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@893 -- # return 0 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:19.646 18:55:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:19.906 512+0 records in 00:15:19.906 512+0 records out 00:15:19.906 100663296 bytes (101 MB, 96 MiB) copied, 0.394364 s, 255 MB/s 00:15:19.906 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:19.906 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.906 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:19.906 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.906 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:19.906 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.906 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.166 [2024-11-28 18:55:49.602556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.166 [2024-11-28 18:55:49.630645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.166 18:55:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.166 "name": "raid_bdev1", 00:15:20.166 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:20.166 "strip_size_kb": 64, 00:15:20.166 "state": "online", 00:15:20.166 "raid_level": "raid5f", 00:15:20.166 "superblock": false, 00:15:20.166 "num_base_bdevs": 4, 00:15:20.166 "num_base_bdevs_discovered": 3, 00:15:20.166 "num_base_bdevs_operational": 3, 00:15:20.166 "base_bdevs_list": [ 00:15:20.166 { 00:15:20.166 "name": null, 00:15:20.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.166 "is_configured": false, 00:15:20.166 "data_offset": 0, 00:15:20.166 "data_size": 65536 00:15:20.166 }, 00:15:20.166 { 00:15:20.166 "name": "BaseBdev2", 00:15:20.166 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:20.166 "is_configured": true, 00:15:20.166 "data_offset": 0, 00:15:20.166 "data_size": 65536 00:15:20.166 }, 00:15:20.166 { 00:15:20.166 "name": "BaseBdev3", 00:15:20.166 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:20.166 "is_configured": true, 00:15:20.166 "data_offset": 0, 00:15:20.166 "data_size": 65536 00:15:20.166 }, 00:15:20.166 { 00:15:20.166 "name": "BaseBdev4", 00:15:20.166 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 
00:15:20.166 "is_configured": true, 00:15:20.166 "data_offset": 0, 00:15:20.166 "data_size": 65536 00:15:20.166 } 00:15:20.166 ] 00:15:20.166 }' 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.166 18:55:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.736 18:55:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.736 18:55:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.736 18:55:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.736 [2024-11-28 18:55:50.062744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.736 [2024-11-28 18:55:50.066873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:15:20.736 18:55:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.736 18:55:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:20.736 [2024-11-28 18:55:50.069011] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.678 "name": "raid_bdev1", 00:15:21.678 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:21.678 "strip_size_kb": 64, 00:15:21.678 "state": "online", 00:15:21.678 "raid_level": "raid5f", 00:15:21.678 "superblock": false, 00:15:21.678 "num_base_bdevs": 4, 00:15:21.678 "num_base_bdevs_discovered": 4, 00:15:21.678 "num_base_bdevs_operational": 4, 00:15:21.678 "process": { 00:15:21.678 "type": "rebuild", 00:15:21.678 "target": "spare", 00:15:21.678 "progress": { 00:15:21.678 "blocks": 19200, 00:15:21.678 "percent": 9 00:15:21.678 } 00:15:21.678 }, 00:15:21.678 "base_bdevs_list": [ 00:15:21.678 { 00:15:21.678 "name": "spare", 00:15:21.678 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:21.678 "is_configured": true, 00:15:21.678 "data_offset": 0, 00:15:21.678 "data_size": 65536 00:15:21.678 }, 00:15:21.678 { 00:15:21.678 "name": "BaseBdev2", 00:15:21.678 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:21.678 "is_configured": true, 00:15:21.678 "data_offset": 0, 00:15:21.678 "data_size": 65536 00:15:21.678 }, 00:15:21.678 { 00:15:21.678 "name": "BaseBdev3", 00:15:21.678 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:21.678 "is_configured": true, 00:15:21.678 "data_offset": 0, 00:15:21.678 "data_size": 65536 00:15:21.678 }, 00:15:21.678 { 00:15:21.678 "name": "BaseBdev4", 00:15:21.678 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:21.678 "is_configured": true, 00:15:21.678 "data_offset": 0, 00:15:21.678 "data_size": 65536 00:15:21.678 } 00:15:21.678 ] 00:15:21.678 }' 00:15:21.678 18:55:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.678 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.678 [2024-11-28 18:55:51.227873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.678 [2024-11-28 18:55:51.276574] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:21.678 [2024-11-28 18:55:51.276714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.678 [2024-11-28 18:55:51.276758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.678 [2024-11-28 18:55:51.276791] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.938 18:55:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.938 "name": "raid_bdev1", 00:15:21.938 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:21.938 "strip_size_kb": 64, 00:15:21.938 "state": "online", 00:15:21.938 "raid_level": "raid5f", 00:15:21.938 "superblock": false, 00:15:21.938 "num_base_bdevs": 4, 00:15:21.938 "num_base_bdevs_discovered": 3, 00:15:21.938 "num_base_bdevs_operational": 3, 00:15:21.938 "base_bdevs_list": [ 00:15:21.938 { 00:15:21.938 "name": null, 00:15:21.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.938 "is_configured": false, 00:15:21.938 "data_offset": 0, 00:15:21.938 "data_size": 65536 00:15:21.938 }, 00:15:21.938 { 00:15:21.938 "name": "BaseBdev2", 00:15:21.938 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:21.938 
"is_configured": true, 00:15:21.938 "data_offset": 0, 00:15:21.938 "data_size": 65536 00:15:21.938 }, 00:15:21.938 { 00:15:21.938 "name": "BaseBdev3", 00:15:21.938 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:21.938 "is_configured": true, 00:15:21.938 "data_offset": 0, 00:15:21.938 "data_size": 65536 00:15:21.938 }, 00:15:21.938 { 00:15:21.938 "name": "BaseBdev4", 00:15:21.938 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:21.938 "is_configured": true, 00:15:21.938 "data_offset": 0, 00:15:21.938 "data_size": 65536 00:15:21.938 } 00:15:21.938 ] 00:15:21.938 }' 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.938 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.198 "name": 
"raid_bdev1", 00:15:22.198 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:22.198 "strip_size_kb": 64, 00:15:22.198 "state": "online", 00:15:22.198 "raid_level": "raid5f", 00:15:22.198 "superblock": false, 00:15:22.198 "num_base_bdevs": 4, 00:15:22.198 "num_base_bdevs_discovered": 3, 00:15:22.198 "num_base_bdevs_operational": 3, 00:15:22.198 "base_bdevs_list": [ 00:15:22.198 { 00:15:22.198 "name": null, 00:15:22.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.198 "is_configured": false, 00:15:22.198 "data_offset": 0, 00:15:22.198 "data_size": 65536 00:15:22.198 }, 00:15:22.198 { 00:15:22.198 "name": "BaseBdev2", 00:15:22.198 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:22.198 "is_configured": true, 00:15:22.198 "data_offset": 0, 00:15:22.198 "data_size": 65536 00:15:22.198 }, 00:15:22.198 { 00:15:22.198 "name": "BaseBdev3", 00:15:22.198 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:22.198 "is_configured": true, 00:15:22.198 "data_offset": 0, 00:15:22.198 "data_size": 65536 00:15:22.198 }, 00:15:22.198 { 00:15:22.198 "name": "BaseBdev4", 00:15:22.198 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:22.198 "is_configured": true, 00:15:22.198 "data_offset": 0, 00:15:22.198 "data_size": 65536 00:15:22.198 } 00:15:22.198 ] 00:15:22.198 }' 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.198 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.458 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.458 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.458 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.458 18:55:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.458 [2024-11-28 18:55:51.830734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.458 [2024-11-28 18:55:51.834402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bc30 00:15:22.458 [2024-11-28 18:55:51.836644] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.458 18:55:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.458 18:55:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.396 "name": "raid_bdev1", 00:15:23.396 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:23.396 "strip_size_kb": 64, 00:15:23.396 
"state": "online", 00:15:23.396 "raid_level": "raid5f", 00:15:23.396 "superblock": false, 00:15:23.396 "num_base_bdevs": 4, 00:15:23.396 "num_base_bdevs_discovered": 4, 00:15:23.396 "num_base_bdevs_operational": 4, 00:15:23.396 "process": { 00:15:23.396 "type": "rebuild", 00:15:23.396 "target": "spare", 00:15:23.396 "progress": { 00:15:23.396 "blocks": 19200, 00:15:23.396 "percent": 9 00:15:23.396 } 00:15:23.396 }, 00:15:23.396 "base_bdevs_list": [ 00:15:23.396 { 00:15:23.396 "name": "spare", 00:15:23.396 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:23.396 "is_configured": true, 00:15:23.396 "data_offset": 0, 00:15:23.396 "data_size": 65536 00:15:23.396 }, 00:15:23.396 { 00:15:23.396 "name": "BaseBdev2", 00:15:23.396 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:23.396 "is_configured": true, 00:15:23.396 "data_offset": 0, 00:15:23.396 "data_size": 65536 00:15:23.396 }, 00:15:23.396 { 00:15:23.396 "name": "BaseBdev3", 00:15:23.396 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:23.396 "is_configured": true, 00:15:23.396 "data_offset": 0, 00:15:23.396 "data_size": 65536 00:15:23.396 }, 00:15:23.396 { 00:15:23.396 "name": "BaseBdev4", 00:15:23.396 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:23.396 "is_configured": true, 00:15:23.396 "data_offset": 0, 00:15:23.396 "data_size": 65536 00:15:23.396 } 00:15:23.396 ] 00:15:23.396 }' 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=505 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.396 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.656 18:55:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.656 18:55:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.656 18:55:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.656 18:55:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.656 18:55:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.656 18:55:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.656 "name": "raid_bdev1", 00:15:23.656 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:23.656 "strip_size_kb": 64, 00:15:23.657 "state": "online", 00:15:23.657 "raid_level": "raid5f", 00:15:23.657 "superblock": false, 00:15:23.657 "num_base_bdevs": 4, 00:15:23.657 "num_base_bdevs_discovered": 4, 00:15:23.657 "num_base_bdevs_operational": 4, 00:15:23.657 "process": { 00:15:23.657 "type": "rebuild", 
00:15:23.657 "target": "spare", 00:15:23.657 "progress": { 00:15:23.657 "blocks": 21120, 00:15:23.657 "percent": 10 00:15:23.657 } 00:15:23.657 }, 00:15:23.657 "base_bdevs_list": [ 00:15:23.657 { 00:15:23.657 "name": "spare", 00:15:23.657 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:23.657 "is_configured": true, 00:15:23.657 "data_offset": 0, 00:15:23.657 "data_size": 65536 00:15:23.657 }, 00:15:23.657 { 00:15:23.657 "name": "BaseBdev2", 00:15:23.657 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:23.657 "is_configured": true, 00:15:23.657 "data_offset": 0, 00:15:23.657 "data_size": 65536 00:15:23.657 }, 00:15:23.657 { 00:15:23.657 "name": "BaseBdev3", 00:15:23.657 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:23.657 "is_configured": true, 00:15:23.657 "data_offset": 0, 00:15:23.657 "data_size": 65536 00:15:23.657 }, 00:15:23.657 { 00:15:23.657 "name": "BaseBdev4", 00:15:23.657 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:23.657 "is_configured": true, 00:15:23.657 "data_offset": 0, 00:15:23.657 "data_size": 65536 00:15:23.657 } 00:15:23.657 ] 00:15:23.657 }' 00:15:23.657 18:55:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.657 18:55:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.657 18:55:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.657 18:55:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.657 18:55:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.596 18:55:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.856 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.856 "name": "raid_bdev1", 00:15:24.856 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:24.856 "strip_size_kb": 64, 00:15:24.856 "state": "online", 00:15:24.856 "raid_level": "raid5f", 00:15:24.856 "superblock": false, 00:15:24.856 "num_base_bdevs": 4, 00:15:24.856 "num_base_bdevs_discovered": 4, 00:15:24.856 "num_base_bdevs_operational": 4, 00:15:24.856 "process": { 00:15:24.856 "type": "rebuild", 00:15:24.856 "target": "spare", 00:15:24.856 "progress": { 00:15:24.856 "blocks": 44160, 00:15:24.856 "percent": 22 00:15:24.856 } 00:15:24.856 }, 00:15:24.856 "base_bdevs_list": [ 00:15:24.856 { 00:15:24.856 "name": "spare", 00:15:24.856 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:24.856 "is_configured": true, 00:15:24.856 "data_offset": 0, 00:15:24.856 "data_size": 65536 00:15:24.856 }, 00:15:24.856 { 00:15:24.856 "name": "BaseBdev2", 00:15:24.856 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:24.856 "is_configured": true, 00:15:24.856 "data_offset": 0, 00:15:24.856 
"data_size": 65536 00:15:24.856 }, 00:15:24.856 { 00:15:24.856 "name": "BaseBdev3", 00:15:24.856 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:24.856 "is_configured": true, 00:15:24.856 "data_offset": 0, 00:15:24.856 "data_size": 65536 00:15:24.856 }, 00:15:24.856 { 00:15:24.856 "name": "BaseBdev4", 00:15:24.856 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:24.856 "is_configured": true, 00:15:24.856 "data_offset": 0, 00:15:24.856 "data_size": 65536 00:15:24.856 } 00:15:24.856 ] 00:15:24.856 }' 00:15:24.856 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.856 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.856 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.856 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.856 18:55:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.795 "name": "raid_bdev1", 00:15:25.795 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:25.795 "strip_size_kb": 64, 00:15:25.795 "state": "online", 00:15:25.795 "raid_level": "raid5f", 00:15:25.795 "superblock": false, 00:15:25.795 "num_base_bdevs": 4, 00:15:25.795 "num_base_bdevs_discovered": 4, 00:15:25.795 "num_base_bdevs_operational": 4, 00:15:25.795 "process": { 00:15:25.795 "type": "rebuild", 00:15:25.795 "target": "spare", 00:15:25.795 "progress": { 00:15:25.795 "blocks": 65280, 00:15:25.795 "percent": 33 00:15:25.795 } 00:15:25.795 }, 00:15:25.795 "base_bdevs_list": [ 00:15:25.795 { 00:15:25.795 "name": "spare", 00:15:25.795 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:25.795 "is_configured": true, 00:15:25.795 "data_offset": 0, 00:15:25.795 "data_size": 65536 00:15:25.795 }, 00:15:25.795 { 00:15:25.795 "name": "BaseBdev2", 00:15:25.795 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:25.795 "is_configured": true, 00:15:25.795 "data_offset": 0, 00:15:25.795 "data_size": 65536 00:15:25.795 }, 00:15:25.795 { 00:15:25.795 "name": "BaseBdev3", 00:15:25.795 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:25.795 "is_configured": true, 00:15:25.795 "data_offset": 0, 00:15:25.795 "data_size": 65536 00:15:25.795 }, 00:15:25.795 { 00:15:25.795 "name": "BaseBdev4", 00:15:25.795 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:25.795 "is_configured": true, 00:15:25.795 "data_offset": 0, 00:15:25.795 "data_size": 65536 00:15:25.795 } 00:15:25.795 ] 00:15:25.795 }' 00:15:25.795 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:26.054 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.054 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.054 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.054 18:55:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.992 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.993 "name": "raid_bdev1", 00:15:26.993 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:26.993 "strip_size_kb": 64, 00:15:26.993 "state": "online", 00:15:26.993 "raid_level": "raid5f", 00:15:26.993 "superblock": false, 00:15:26.993 
"num_base_bdevs": 4, 00:15:26.993 "num_base_bdevs_discovered": 4, 00:15:26.993 "num_base_bdevs_operational": 4, 00:15:26.993 "process": { 00:15:26.993 "type": "rebuild", 00:15:26.993 "target": "spare", 00:15:26.993 "progress": { 00:15:26.993 "blocks": 88320, 00:15:26.993 "percent": 44 00:15:26.993 } 00:15:26.993 }, 00:15:26.993 "base_bdevs_list": [ 00:15:26.993 { 00:15:26.993 "name": "spare", 00:15:26.993 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:26.993 "is_configured": true, 00:15:26.993 "data_offset": 0, 00:15:26.993 "data_size": 65536 00:15:26.993 }, 00:15:26.993 { 00:15:26.993 "name": "BaseBdev2", 00:15:26.993 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:26.993 "is_configured": true, 00:15:26.993 "data_offset": 0, 00:15:26.993 "data_size": 65536 00:15:26.993 }, 00:15:26.993 { 00:15:26.993 "name": "BaseBdev3", 00:15:26.993 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:26.993 "is_configured": true, 00:15:26.993 "data_offset": 0, 00:15:26.993 "data_size": 65536 00:15:26.993 }, 00:15:26.993 { 00:15:26.993 "name": "BaseBdev4", 00:15:26.993 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:26.993 "is_configured": true, 00:15:26.993 "data_offset": 0, 00:15:26.993 "data_size": 65536 00:15:26.993 } 00:15:26.993 ] 00:15:26.993 }' 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.993 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.252 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.252 18:55:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.190 "name": "raid_bdev1", 00:15:28.190 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:28.190 "strip_size_kb": 64, 00:15:28.190 "state": "online", 00:15:28.190 "raid_level": "raid5f", 00:15:28.190 "superblock": false, 00:15:28.190 "num_base_bdevs": 4, 00:15:28.190 "num_base_bdevs_discovered": 4, 00:15:28.190 "num_base_bdevs_operational": 4, 00:15:28.190 "process": { 00:15:28.190 "type": "rebuild", 00:15:28.190 "target": "spare", 00:15:28.190 "progress": { 00:15:28.190 "blocks": 109440, 00:15:28.190 "percent": 55 00:15:28.190 } 00:15:28.190 }, 00:15:28.190 "base_bdevs_list": [ 00:15:28.190 { 00:15:28.190 "name": "spare", 00:15:28.190 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:28.190 "is_configured": true, 00:15:28.190 "data_offset": 0, 00:15:28.190 "data_size": 65536 00:15:28.190 }, 00:15:28.190 { 00:15:28.190 
"name": "BaseBdev2", 00:15:28.190 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:28.190 "is_configured": true, 00:15:28.190 "data_offset": 0, 00:15:28.190 "data_size": 65536 00:15:28.190 }, 00:15:28.190 { 00:15:28.190 "name": "BaseBdev3", 00:15:28.190 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:28.190 "is_configured": true, 00:15:28.190 "data_offset": 0, 00:15:28.190 "data_size": 65536 00:15:28.190 }, 00:15:28.190 { 00:15:28.190 "name": "BaseBdev4", 00:15:28.190 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:28.190 "is_configured": true, 00:15:28.190 "data_offset": 0, 00:15:28.190 "data_size": 65536 00:15:28.190 } 00:15:28.190 ] 00:15:28.190 }' 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.190 18:55:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.569 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.569 "name": "raid_bdev1", 00:15:29.569 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:29.569 "strip_size_kb": 64, 00:15:29.569 "state": "online", 00:15:29.569 "raid_level": "raid5f", 00:15:29.569 "superblock": false, 00:15:29.569 "num_base_bdevs": 4, 00:15:29.569 "num_base_bdevs_discovered": 4, 00:15:29.569 "num_base_bdevs_operational": 4, 00:15:29.569 "process": { 00:15:29.569 "type": "rebuild", 00:15:29.569 "target": "spare", 00:15:29.569 "progress": { 00:15:29.569 "blocks": 130560, 00:15:29.569 "percent": 66 00:15:29.569 } 00:15:29.569 }, 00:15:29.569 "base_bdevs_list": [ 00:15:29.569 { 00:15:29.569 "name": "spare", 00:15:29.569 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:29.569 "is_configured": true, 00:15:29.570 "data_offset": 0, 00:15:29.570 "data_size": 65536 00:15:29.570 }, 00:15:29.570 { 00:15:29.570 "name": "BaseBdev2", 00:15:29.570 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:29.570 "is_configured": true, 00:15:29.570 "data_offset": 0, 00:15:29.570 "data_size": 65536 00:15:29.570 }, 00:15:29.570 { 00:15:29.570 "name": "BaseBdev3", 00:15:29.570 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:29.570 "is_configured": true, 00:15:29.570 "data_offset": 0, 00:15:29.570 "data_size": 65536 00:15:29.570 }, 00:15:29.570 { 00:15:29.570 "name": "BaseBdev4", 00:15:29.570 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:29.570 "is_configured": true, 00:15:29.570 "data_offset": 0, 00:15:29.570 
"data_size": 65536 00:15:29.570 } 00:15:29.570 ] 00:15:29.570 }' 00:15:29.570 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.570 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.570 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.570 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.570 18:55:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.508 "name": "raid_bdev1", 00:15:30.508 "uuid": 
"1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:30.508 "strip_size_kb": 64, 00:15:30.508 "state": "online", 00:15:30.508 "raid_level": "raid5f", 00:15:30.508 "superblock": false, 00:15:30.508 "num_base_bdevs": 4, 00:15:30.508 "num_base_bdevs_discovered": 4, 00:15:30.508 "num_base_bdevs_operational": 4, 00:15:30.508 "process": { 00:15:30.508 "type": "rebuild", 00:15:30.508 "target": "spare", 00:15:30.508 "progress": { 00:15:30.508 "blocks": 153600, 00:15:30.508 "percent": 78 00:15:30.508 } 00:15:30.508 }, 00:15:30.508 "base_bdevs_list": [ 00:15:30.508 { 00:15:30.508 "name": "spare", 00:15:30.508 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:30.508 "is_configured": true, 00:15:30.508 "data_offset": 0, 00:15:30.508 "data_size": 65536 00:15:30.508 }, 00:15:30.508 { 00:15:30.508 "name": "BaseBdev2", 00:15:30.508 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:30.508 "is_configured": true, 00:15:30.508 "data_offset": 0, 00:15:30.508 "data_size": 65536 00:15:30.508 }, 00:15:30.508 { 00:15:30.508 "name": "BaseBdev3", 00:15:30.508 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:30.508 "is_configured": true, 00:15:30.508 "data_offset": 0, 00:15:30.508 "data_size": 65536 00:15:30.508 }, 00:15:30.508 { 00:15:30.508 "name": "BaseBdev4", 00:15:30.508 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:30.508 "is_configured": true, 00:15:30.508 "data_offset": 0, 00:15:30.508 "data_size": 65536 00:15:30.508 } 00:15:30.508 ] 00:15:30.508 }' 00:15:30.508 18:55:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.508 18:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.508 18:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.508 18:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.508 18:56:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:15:31.448 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.448 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.448 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.448 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.448 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.448 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.707 "name": "raid_bdev1", 00:15:31.707 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:31.707 "strip_size_kb": 64, 00:15:31.707 "state": "online", 00:15:31.707 "raid_level": "raid5f", 00:15:31.707 "superblock": false, 00:15:31.707 "num_base_bdevs": 4, 00:15:31.707 "num_base_bdevs_discovered": 4, 00:15:31.707 "num_base_bdevs_operational": 4, 00:15:31.707 "process": { 00:15:31.707 "type": "rebuild", 00:15:31.707 "target": "spare", 00:15:31.707 "progress": { 00:15:31.707 "blocks": 174720, 00:15:31.707 "percent": 88 00:15:31.707 } 00:15:31.707 }, 00:15:31.707 "base_bdevs_list": [ 00:15:31.707 { 00:15:31.707 "name": "spare", 00:15:31.707 "uuid": 
"e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:31.707 "is_configured": true, 00:15:31.707 "data_offset": 0, 00:15:31.707 "data_size": 65536 00:15:31.707 }, 00:15:31.707 { 00:15:31.707 "name": "BaseBdev2", 00:15:31.707 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:31.707 "is_configured": true, 00:15:31.707 "data_offset": 0, 00:15:31.707 "data_size": 65536 00:15:31.707 }, 00:15:31.707 { 00:15:31.707 "name": "BaseBdev3", 00:15:31.707 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:31.707 "is_configured": true, 00:15:31.707 "data_offset": 0, 00:15:31.707 "data_size": 65536 00:15:31.707 }, 00:15:31.707 { 00:15:31.707 "name": "BaseBdev4", 00:15:31.707 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:31.707 "is_configured": true, 00:15:31.707 "data_offset": 0, 00:15:31.707 "data_size": 65536 00:15:31.707 } 00:15:31.707 ] 00:15:31.707 }' 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.707 18:56:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.646 18:56:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.646 [2024-11-28 18:56:02.195608] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:32.646 [2024-11-28 18:56:02.195729] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:32.646 [2024-11-28 18:56:02.195797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.646 "name": "raid_bdev1", 00:15:32.646 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:32.646 "strip_size_kb": 64, 00:15:32.646 "state": "online", 00:15:32.646 "raid_level": "raid5f", 00:15:32.646 "superblock": false, 00:15:32.646 "num_base_bdevs": 4, 00:15:32.646 "num_base_bdevs_discovered": 4, 00:15:32.646 "num_base_bdevs_operational": 4, 00:15:32.646 "process": { 00:15:32.646 "type": "rebuild", 00:15:32.646 "target": "spare", 00:15:32.646 "progress": { 00:15:32.646 "blocks": 195840, 00:15:32.646 "percent": 99 00:15:32.646 } 00:15:32.646 }, 00:15:32.646 "base_bdevs_list": [ 00:15:32.646 { 00:15:32.646 "name": "spare", 00:15:32.646 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:32.646 "is_configured": true, 00:15:32.646 "data_offset": 0, 00:15:32.646 "data_size": 65536 00:15:32.646 }, 00:15:32.646 { 00:15:32.646 "name": "BaseBdev2", 00:15:32.646 "uuid": 
"41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:32.646 "is_configured": true, 00:15:32.646 "data_offset": 0, 00:15:32.646 "data_size": 65536 00:15:32.646 }, 00:15:32.646 { 00:15:32.646 "name": "BaseBdev3", 00:15:32.646 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:32.646 "is_configured": true, 00:15:32.646 "data_offset": 0, 00:15:32.646 "data_size": 65536 00:15:32.646 }, 00:15:32.646 { 00:15:32.646 "name": "BaseBdev4", 00:15:32.646 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:32.646 "is_configured": true, 00:15:32.646 "data_offset": 0, 00:15:32.646 "data_size": 65536 00:15:32.646 } 00:15:32.646 ] 00:15:32.646 }' 00:15:32.646 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.906 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.906 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.906 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.906 18:56:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.846 18:56:03 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.846 "name": "raid_bdev1", 00:15:33.846 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:33.846 "strip_size_kb": 64, 00:15:33.846 "state": "online", 00:15:33.846 "raid_level": "raid5f", 00:15:33.846 "superblock": false, 00:15:33.846 "num_base_bdevs": 4, 00:15:33.846 "num_base_bdevs_discovered": 4, 00:15:33.846 "num_base_bdevs_operational": 4, 00:15:33.846 "base_bdevs_list": [ 00:15:33.846 { 00:15:33.846 "name": "spare", 00:15:33.846 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:33.846 "is_configured": true, 00:15:33.846 "data_offset": 0, 00:15:33.846 "data_size": 65536 00:15:33.846 }, 00:15:33.846 { 00:15:33.846 "name": "BaseBdev2", 00:15:33.846 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:33.846 "is_configured": true, 00:15:33.846 "data_offset": 0, 00:15:33.846 "data_size": 65536 00:15:33.846 }, 00:15:33.846 { 00:15:33.846 "name": "BaseBdev3", 00:15:33.846 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:33.846 "is_configured": true, 00:15:33.846 "data_offset": 0, 00:15:33.846 "data_size": 65536 00:15:33.846 }, 00:15:33.846 { 00:15:33.846 "name": "BaseBdev4", 00:15:33.846 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:33.846 "is_configured": true, 00:15:33.846 "data_offset": 0, 00:15:33.846 "data_size": 65536 00:15:33.846 } 00:15:33.846 ] 00:15:33.846 }' 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:33.846 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.106 "name": "raid_bdev1", 00:15:34.106 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:34.106 "strip_size_kb": 64, 00:15:34.106 "state": "online", 00:15:34.106 "raid_level": "raid5f", 00:15:34.106 "superblock": false, 00:15:34.106 "num_base_bdevs": 4, 00:15:34.106 "num_base_bdevs_discovered": 4, 00:15:34.106 "num_base_bdevs_operational": 4, 00:15:34.106 "base_bdevs_list": [ 00:15:34.106 { 00:15:34.106 "name": "spare", 
00:15:34.106 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:34.106 "is_configured": true, 00:15:34.106 "data_offset": 0, 00:15:34.106 "data_size": 65536 00:15:34.106 }, 00:15:34.106 { 00:15:34.106 "name": "BaseBdev2", 00:15:34.106 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:34.106 "is_configured": true, 00:15:34.106 "data_offset": 0, 00:15:34.106 "data_size": 65536 00:15:34.106 }, 00:15:34.106 { 00:15:34.106 "name": "BaseBdev3", 00:15:34.106 "uuid": "abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:34.106 "is_configured": true, 00:15:34.106 "data_offset": 0, 00:15:34.106 "data_size": 65536 00:15:34.106 }, 00:15:34.106 { 00:15:34.106 "name": "BaseBdev4", 00:15:34.106 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:34.106 "is_configured": true, 00:15:34.106 "data_offset": 0, 00:15:34.106 "data_size": 65536 00:15:34.106 } 00:15:34.106 ] 00:15:34.106 }' 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.106 18:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.107 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.107 18:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.107 18:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.107 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.107 "name": "raid_bdev1", 00:15:34.107 "uuid": "1aeac1d4-4687-4cc8-994c-48e5f53b50e9", 00:15:34.107 "strip_size_kb": 64, 00:15:34.107 "state": "online", 00:15:34.107 "raid_level": "raid5f", 00:15:34.107 "superblock": false, 00:15:34.107 "num_base_bdevs": 4, 00:15:34.107 "num_base_bdevs_discovered": 4, 00:15:34.107 "num_base_bdevs_operational": 4, 00:15:34.107 "base_bdevs_list": [ 00:15:34.107 { 00:15:34.107 "name": "spare", 00:15:34.107 "uuid": "e309cf38-7bc8-53aa-8c4c-70d2c1eafb7d", 00:15:34.107 "is_configured": true, 00:15:34.107 "data_offset": 0, 00:15:34.107 "data_size": 65536 00:15:34.107 }, 00:15:34.107 { 00:15:34.107 "name": "BaseBdev2", 00:15:34.107 "uuid": "41a5e569-e8ef-5e62-928c-005033d4288b", 00:15:34.107 "is_configured": true, 00:15:34.107 "data_offset": 0, 00:15:34.107 "data_size": 65536 00:15:34.107 }, 00:15:34.107 { 00:15:34.107 "name": "BaseBdev3", 00:15:34.107 "uuid": 
"abbc0aa1-cb23-5266-9cd1-bcdf70505ce9", 00:15:34.107 "is_configured": true, 00:15:34.107 "data_offset": 0, 00:15:34.107 "data_size": 65536 00:15:34.107 }, 00:15:34.107 { 00:15:34.107 "name": "BaseBdev4", 00:15:34.107 "uuid": "218ca2a7-7e62-5a5b-b25b-697586a11dcc", 00:15:34.107 "is_configured": true, 00:15:34.107 "data_offset": 0, 00:15:34.107 "data_size": 65536 00:15:34.107 } 00:15:34.107 ] 00:15:34.107 }' 00:15:34.107 18:56:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.107 18:56:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.676 [2024-11-28 18:56:04.077889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.676 [2024-11-28 18:56:04.077941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.676 [2024-11-28 18:56:04.078020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.676 [2024-11-28 18:56:04.078112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.676 [2024-11-28 18:56:04.078121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.676 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:34.935 /dev/nbd0 00:15:34.935 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.935 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.936 1+0 records in 00:15:34.936 1+0 records out 00:15:34.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508403 s, 8.1 MB/s 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.936 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd1 00:15:35.196 /dev/nbd1 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.196 1+0 records in 00:15:35.196 1+0 records out 00:15:35.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350747 s, 11.7 MB/s 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # 
return 0 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.196 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.456 18:56:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 96510 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 96510 ']' 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 96510 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96510 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.714 
killing process with pid 96510 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96510' 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 96510 00:15:35.714 Received shutdown signal, test time was about 60.000000 seconds 00:15:35.714 00:15:35.714 Latency(us) 00:15:35.714 [2024-11-28T18:56:05.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.714 [2024-11-28T18:56:05.320Z] =================================================================================================================== 00:15:35.714 [2024-11-28T18:56:05.320Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:35.714 [2024-11-28 18:56:05.208497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.714 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 96510 00:15:35.714 [2024-11-28 18:56:05.257944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:35.974 00:15:35.974 real 0m18.400s 00:15:35.974 user 0m22.292s 00:15:35.974 sys 0m2.302s 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.974 ************************************ 00:15:35.974 END TEST raid5f_rebuild_test 00:15:35.974 ************************************ 00:15:35.974 18:56:05 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:35.974 18:56:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:35.974 18:56:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.974 18:56:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.974 
************************************ 00:15:35.974 START TEST raid5f_rebuild_test_sb 00:15:35.974 ************************************ 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=97010 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:35.974 18:56:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 97010 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 97010 ']' 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.974 18:56:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:36.235 Zero copy mechanism will not be used. 00:15:36.235 [2024-11-28 18:56:05.655247] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:36.235 [2024-11-28 18:56:05.655401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97010 ] 00:15:36.235 [2024-11-28 18:56:05.796549] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:36.235 [2024-11-28 18:56:05.835766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.494 [2024-11-28 18:56:05.861976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.494 [2024-11-28 18:56:05.904931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.494 [2024-11-28 18:56:05.904972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 BaseBdev1_malloc 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 [2024-11-28 18:56:06.505754] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:37.064 [2024-11-28 18:56:06.505811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.064 [2024-11-28 18:56:06.505835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:37.064 
[2024-11-28 18:56:06.505848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.064 [2024-11-28 18:56:06.507844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.064 [2024-11-28 18:56:06.507879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:37.064 BaseBdev1 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 BaseBdev2_malloc 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.064 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 [2024-11-28 18:56:06.534288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:37.064 [2024-11-28 18:56:06.534337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.065 [2024-11-28 18:56:06.534354] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:37.065 [2024-11-28 18:56:06.534363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.065 [2024-11-28 18:56:06.536374] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.065 [2024-11-28 18:56:06.536407] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:37.065 BaseBdev2 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.065 BaseBdev3_malloc 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.065 [2024-11-28 18:56:06.562932] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:37.065 [2024-11-28 18:56:06.562980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.065 [2024-11-28 18:56:06.562998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:37.065 [2024-11-28 18:56:06.563008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.065 [2024-11-28 18:56:06.565091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.065 [2024-11-28 18:56:06.565126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:37.065 BaseBdev3 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.065 BaseBdev4_malloc 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.065 [2024-11-28 18:56:06.608313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:37.065 [2024-11-28 18:56:06.608419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.065 [2024-11-28 18:56:06.608485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:37.065 [2024-11-28 18:56:06.608510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.065 [2024-11-28 18:56:06.612575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.065 [2024-11-28 18:56:06.612624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:37.065 BaseBdev4 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.065 18:56:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.065 spare_malloc 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.065 spare_delay 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.065 [2024-11-28 18:56:06.650588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:37.065 [2024-11-28 18:56:06.650631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.065 [2024-11-28 18:56:06.650647] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:37.065 [2024-11-28 18:56:06.650657] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.065 [2024-11-28 18:56:06.652669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.065 [2024-11-28 18:56:06.652704] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:15:37.065 spare 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.065 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.065 [2024-11-28 18:56:06.662698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.065 [2024-11-28 18:56:06.664472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.065 [2024-11-28 18:56:06.664537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.065 [2024-11-28 18:56:06.664578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:37.065 [2024-11-28 18:56:06.664742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:37.065 [2024-11-28 18:56:06.664765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:37.065 [2024-11-28 18:56:06.664995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:37.065 [2024-11-28 18:56:06.665443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:37.065 [2024-11-28 18:56:06.665463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:37.065 [2024-11-28 18:56:06.665570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.325 18:56:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.325 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.325 "name": "raid_bdev1", 00:15:37.325 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:37.325 "strip_size_kb": 64, 00:15:37.325 "state": "online", 00:15:37.325 "raid_level": "raid5f", 00:15:37.325 "superblock": true, 
00:15:37.325 "num_base_bdevs": 4, 00:15:37.325 "num_base_bdevs_discovered": 4, 00:15:37.325 "num_base_bdevs_operational": 4, 00:15:37.325 "base_bdevs_list": [ 00:15:37.325 { 00:15:37.325 "name": "BaseBdev1", 00:15:37.325 "uuid": "20267868-a937-5b91-b755-de1a34147eac", 00:15:37.325 "is_configured": true, 00:15:37.325 "data_offset": 2048, 00:15:37.325 "data_size": 63488 00:15:37.325 }, 00:15:37.325 { 00:15:37.325 "name": "BaseBdev2", 00:15:37.325 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:37.325 "is_configured": true, 00:15:37.325 "data_offset": 2048, 00:15:37.325 "data_size": 63488 00:15:37.325 }, 00:15:37.325 { 00:15:37.325 "name": "BaseBdev3", 00:15:37.325 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:37.325 "is_configured": true, 00:15:37.325 "data_offset": 2048, 00:15:37.325 "data_size": 63488 00:15:37.325 }, 00:15:37.325 { 00:15:37.325 "name": "BaseBdev4", 00:15:37.325 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:37.326 "is_configured": true, 00:15:37.326 "data_offset": 2048, 00:15:37.326 "data_size": 63488 00:15:37.326 } 00:15:37.326 ] 00:15:37.326 }' 00:15:37.326 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.326 18:56:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.586 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.586 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.586 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:37.586 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.586 [2024-11-28 18:56:07.147622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.586 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.586 18:56:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:37.846 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:37.847 [2024-11-28 18:56:07.399574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:37.847 /dev/nbd0 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.847 1+0 records in 00:15:37.847 1+0 records out 00:15:37.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035549 s, 11.5 MB/s 00:15:37.847 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.106 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 
-- # size=4096 00:15:38.107 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.107 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.107 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:38.107 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.107 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:38.107 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:38.107 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:38.107 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:38.107 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:38.366 496+0 records in 00:15:38.366 496+0 records out 00:15:38.366 97517568 bytes (98 MB, 93 MiB) copied, 0.385876 s, 253 MB/s 00:15:38.366 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:38.366 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.366 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:38.366 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.366 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:38.366 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.366 18:56:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:15:38.627 [2024-11-28 18:56:08.057690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.627 [2024-11-28 18:56:08.087311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.627 18:56:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.627 "name": "raid_bdev1", 00:15:38.627 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:38.627 "strip_size_kb": 64, 00:15:38.627 "state": "online", 00:15:38.627 "raid_level": "raid5f", 00:15:38.627 "superblock": true, 00:15:38.627 "num_base_bdevs": 4, 00:15:38.627 "num_base_bdevs_discovered": 3, 00:15:38.627 "num_base_bdevs_operational": 3, 00:15:38.627 "base_bdevs_list": [ 00:15:38.627 { 00:15:38.627 "name": null, 00:15:38.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.627 "is_configured": false, 00:15:38.627 "data_offset": 0, 00:15:38.627 "data_size": 63488 00:15:38.627 }, 00:15:38.627 { 00:15:38.627 "name": "BaseBdev2", 00:15:38.627 "uuid": 
"2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:38.627 "is_configured": true, 00:15:38.627 "data_offset": 2048, 00:15:38.627 "data_size": 63488 00:15:38.627 }, 00:15:38.627 { 00:15:38.627 "name": "BaseBdev3", 00:15:38.627 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:38.627 "is_configured": true, 00:15:38.627 "data_offset": 2048, 00:15:38.627 "data_size": 63488 00:15:38.627 }, 00:15:38.627 { 00:15:38.627 "name": "BaseBdev4", 00:15:38.627 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:38.627 "is_configured": true, 00:15:38.627 "data_offset": 2048, 00:15:38.627 "data_size": 63488 00:15:38.627 } 00:15:38.627 ] 00:15:38.627 }' 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.627 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.201 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:39.201 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.201 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.201 [2024-11-28 18:56:08.559415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.201 [2024-11-28 18:56:08.563579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:15:39.201 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.201 18:56:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:39.201 [2024-11-28 18:56:08.565806] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.197 "name": "raid_bdev1", 00:15:40.197 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:40.197 "strip_size_kb": 64, 00:15:40.197 "state": "online", 00:15:40.197 "raid_level": "raid5f", 00:15:40.197 "superblock": true, 00:15:40.197 "num_base_bdevs": 4, 00:15:40.197 "num_base_bdevs_discovered": 4, 00:15:40.197 "num_base_bdevs_operational": 4, 00:15:40.197 "process": { 00:15:40.197 "type": "rebuild", 00:15:40.197 "target": "spare", 00:15:40.197 "progress": { 00:15:40.197 "blocks": 19200, 00:15:40.197 "percent": 10 00:15:40.197 } 00:15:40.197 }, 00:15:40.197 "base_bdevs_list": [ 00:15:40.197 { 00:15:40.197 "name": "spare", 00:15:40.197 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:40.197 "is_configured": true, 00:15:40.197 "data_offset": 2048, 00:15:40.197 "data_size": 63488 00:15:40.197 }, 00:15:40.197 { 00:15:40.197 "name": "BaseBdev2", 00:15:40.197 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:40.197 "is_configured": true, 00:15:40.197 
"data_offset": 2048, 00:15:40.197 "data_size": 63488 00:15:40.197 }, 00:15:40.197 { 00:15:40.197 "name": "BaseBdev3", 00:15:40.197 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:40.197 "is_configured": true, 00:15:40.197 "data_offset": 2048, 00:15:40.197 "data_size": 63488 00:15:40.197 }, 00:15:40.197 { 00:15:40.197 "name": "BaseBdev4", 00:15:40.197 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:40.197 "is_configured": true, 00:15:40.197 "data_offset": 2048, 00:15:40.197 "data_size": 63488 00:15:40.197 } 00:15:40.197 ] 00:15:40.197 }' 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.197 [2024-11-28 18:56:09.708656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.197 [2024-11-28 18:56:09.773490] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:40.197 [2024-11-28 18:56:09.773554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.197 [2024-11-28 18:56:09.773569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.197 [2024-11-28 18:56:09.773581] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:40.197 
18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.197 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.457 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.457 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.457 "name": "raid_bdev1", 00:15:40.457 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:40.457 
"strip_size_kb": 64, 00:15:40.457 "state": "online", 00:15:40.457 "raid_level": "raid5f", 00:15:40.457 "superblock": true, 00:15:40.457 "num_base_bdevs": 4, 00:15:40.457 "num_base_bdevs_discovered": 3, 00:15:40.457 "num_base_bdevs_operational": 3, 00:15:40.457 "base_bdevs_list": [ 00:15:40.457 { 00:15:40.457 "name": null, 00:15:40.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.457 "is_configured": false, 00:15:40.457 "data_offset": 0, 00:15:40.457 "data_size": 63488 00:15:40.457 }, 00:15:40.457 { 00:15:40.457 "name": "BaseBdev2", 00:15:40.457 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:40.457 "is_configured": true, 00:15:40.457 "data_offset": 2048, 00:15:40.457 "data_size": 63488 00:15:40.457 }, 00:15:40.457 { 00:15:40.457 "name": "BaseBdev3", 00:15:40.457 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:40.457 "is_configured": true, 00:15:40.457 "data_offset": 2048, 00:15:40.457 "data_size": 63488 00:15:40.457 }, 00:15:40.457 { 00:15:40.457 "name": "BaseBdev4", 00:15:40.457 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:40.457 "is_configured": true, 00:15:40.457 "data_offset": 2048, 00:15:40.457 "data_size": 63488 00:15:40.457 } 00:15:40.457 ] 00:15:40.457 }' 00:15:40.457 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.457 18:56:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.717 
18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.717 "name": "raid_bdev1", 00:15:40.717 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:40.717 "strip_size_kb": 64, 00:15:40.717 "state": "online", 00:15:40.717 "raid_level": "raid5f", 00:15:40.717 "superblock": true, 00:15:40.717 "num_base_bdevs": 4, 00:15:40.717 "num_base_bdevs_discovered": 3, 00:15:40.717 "num_base_bdevs_operational": 3, 00:15:40.717 "base_bdevs_list": [ 00:15:40.717 { 00:15:40.717 "name": null, 00:15:40.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.717 "is_configured": false, 00:15:40.717 "data_offset": 0, 00:15:40.717 "data_size": 63488 00:15:40.717 }, 00:15:40.717 { 00:15:40.717 "name": "BaseBdev2", 00:15:40.717 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:40.717 "is_configured": true, 00:15:40.717 "data_offset": 2048, 00:15:40.717 "data_size": 63488 00:15:40.717 }, 00:15:40.717 { 00:15:40.717 "name": "BaseBdev3", 00:15:40.717 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:40.717 "is_configured": true, 00:15:40.717 "data_offset": 2048, 00:15:40.717 "data_size": 63488 00:15:40.717 }, 00:15:40.717 { 00:15:40.717 "name": "BaseBdev4", 00:15:40.717 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:40.717 "is_configured": true, 00:15:40.717 "data_offset": 2048, 00:15:40.717 "data_size": 63488 00:15:40.717 } 00:15:40.717 ] 00:15:40.717 }' 00:15:40.717 18:56:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.717 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.977 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.977 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.977 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.977 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.977 [2024-11-28 18:56:10.351341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.977 [2024-11-28 18:56:10.355051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:15:40.977 [2024-11-28 18:56:10.357282] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.977 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.977 18:56:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.917 "name": "raid_bdev1", 00:15:41.917 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:41.917 "strip_size_kb": 64, 00:15:41.917 "state": "online", 00:15:41.917 "raid_level": "raid5f", 00:15:41.917 "superblock": true, 00:15:41.917 "num_base_bdevs": 4, 00:15:41.917 "num_base_bdevs_discovered": 4, 00:15:41.917 "num_base_bdevs_operational": 4, 00:15:41.917 "process": { 00:15:41.917 "type": "rebuild", 00:15:41.917 "target": "spare", 00:15:41.917 "progress": { 00:15:41.917 "blocks": 19200, 00:15:41.917 "percent": 10 00:15:41.917 } 00:15:41.917 }, 00:15:41.917 "base_bdevs_list": [ 00:15:41.917 { 00:15:41.917 "name": "spare", 00:15:41.917 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:41.917 "is_configured": true, 00:15:41.917 "data_offset": 2048, 00:15:41.917 "data_size": 63488 00:15:41.917 }, 00:15:41.917 { 00:15:41.917 "name": "BaseBdev2", 00:15:41.917 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:41.917 "is_configured": true, 00:15:41.917 "data_offset": 2048, 00:15:41.917 "data_size": 63488 00:15:41.917 }, 00:15:41.917 { 00:15:41.917 "name": "BaseBdev3", 00:15:41.917 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:41.917 "is_configured": true, 00:15:41.917 "data_offset": 2048, 00:15:41.917 "data_size": 63488 00:15:41.917 }, 00:15:41.917 { 00:15:41.917 "name": "BaseBdev4", 00:15:41.917 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 
00:15:41.917 "is_configured": true, 00:15:41.917 "data_offset": 2048, 00:15:41.917 "data_size": 63488 00:15:41.917 } 00:15:41.917 ] 00:15:41.917 }' 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:41.917 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.917 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- 
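The `[: =: unary operator expected` failure recorded above (bdev_raid.sh line 666, `'[' = false ']'`) is the classic single-bracket quoting pitfall: when the tested variable expands to nothing, `[ $var = false ]` collapses to `[ = false ]`, which is not a valid test expression. A minimal sketch of the failure mode and the usual quoting fix — the variable name `var` is illustrative, not taken from the script:

```shell
var=""

# Unquoted expansion: the command line becomes '[ = false ]', which is
# malformed; test prints "[: =: unary operator expected" and exits non-zero.
[ $var = false ] 2>/dev/null && echo "matched" || echo "errored or false"

# Quoted expansion keeps the empty string as a real operand, so the
# test is well-formed and simply evaluates to false.
[ "$var" = false ] && echo "matched" || echo "well-formed, false"

# Bash's [[ ]] does not word-split, so it is safe even without quotes.
[[ $var = false ]] && echo "matched" || echo "well-formed, false"
```

Because the script runs with `set +e`-style tolerance here, the malformed test merely logs the error and falls through rather than aborting the run.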
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.178 "name": "raid_bdev1", 00:15:42.178 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:42.178 "strip_size_kb": 64, 00:15:42.178 "state": "online", 00:15:42.178 "raid_level": "raid5f", 00:15:42.178 "superblock": true, 00:15:42.178 "num_base_bdevs": 4, 00:15:42.178 "num_base_bdevs_discovered": 4, 00:15:42.178 "num_base_bdevs_operational": 4, 00:15:42.178 "process": { 00:15:42.178 "type": "rebuild", 00:15:42.178 "target": "spare", 00:15:42.178 "progress": { 00:15:42.178 "blocks": 21120, 00:15:42.178 "percent": 11 00:15:42.178 } 00:15:42.178 }, 00:15:42.178 "base_bdevs_list": [ 00:15:42.178 { 00:15:42.178 "name": "spare", 00:15:42.178 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:42.178 "is_configured": true, 00:15:42.178 "data_offset": 2048, 00:15:42.178 "data_size": 63488 00:15:42.178 }, 00:15:42.178 { 00:15:42.178 "name": "BaseBdev2", 00:15:42.178 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:42.178 "is_configured": true, 00:15:42.178 "data_offset": 2048, 00:15:42.178 "data_size": 63488 00:15:42.178 }, 00:15:42.178 { 00:15:42.178 "name": "BaseBdev3", 00:15:42.178 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:42.178 "is_configured": true, 00:15:42.178 "data_offset": 2048, 00:15:42.178 "data_size": 63488 00:15:42.178 }, 00:15:42.178 { 00:15:42.178 "name": "BaseBdev4", 00:15:42.178 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 
00:15:42.178 "is_configured": true, 00:15:42.178 "data_offset": 2048, 00:15:42.178 "data_size": 63488 00:15:42.178 } 00:15:42.178 ] 00:15:42.178 }' 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.178 18:56:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.118 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.118 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.118 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.118 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.118 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.119 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.119 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.119 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.119 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.119 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.119 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.119 18:56:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.119 "name": "raid_bdev1", 00:15:43.119 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:43.119 "strip_size_kb": 64, 00:15:43.119 "state": "online", 00:15:43.119 "raid_level": "raid5f", 00:15:43.119 "superblock": true, 00:15:43.119 "num_base_bdevs": 4, 00:15:43.119 "num_base_bdevs_discovered": 4, 00:15:43.119 "num_base_bdevs_operational": 4, 00:15:43.119 "process": { 00:15:43.119 "type": "rebuild", 00:15:43.119 "target": "spare", 00:15:43.119 "progress": { 00:15:43.119 "blocks": 44160, 00:15:43.119 "percent": 23 00:15:43.119 } 00:15:43.119 }, 00:15:43.119 "base_bdevs_list": [ 00:15:43.119 { 00:15:43.119 "name": "spare", 00:15:43.119 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:43.119 "is_configured": true, 00:15:43.119 "data_offset": 2048, 00:15:43.119 "data_size": 63488 00:15:43.119 }, 00:15:43.119 { 00:15:43.119 "name": "BaseBdev2", 00:15:43.119 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:43.119 "is_configured": true, 00:15:43.119 "data_offset": 2048, 00:15:43.119 "data_size": 63488 00:15:43.119 }, 00:15:43.119 { 00:15:43.119 "name": "BaseBdev3", 00:15:43.119 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:43.119 "is_configured": true, 00:15:43.119 "data_offset": 2048, 00:15:43.119 "data_size": 63488 00:15:43.119 }, 00:15:43.119 { 00:15:43.119 "name": "BaseBdev4", 00:15:43.119 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:43.119 "is_configured": true, 00:15:43.119 "data_offset": 2048, 00:15:43.119 "data_size": 63488 00:15:43.119 } 00:15:43.119 ] 00:15:43.119 }' 00:15:43.119 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.379 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.379 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.379 18:56:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.379 18:56:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.318 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.318 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.318 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.318 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.318 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.319 "name": "raid_bdev1", 00:15:44.319 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:44.319 "strip_size_kb": 64, 00:15:44.319 "state": "online", 00:15:44.319 "raid_level": "raid5f", 00:15:44.319 "superblock": true, 00:15:44.319 "num_base_bdevs": 4, 00:15:44.319 "num_base_bdevs_discovered": 4, 00:15:44.319 "num_base_bdevs_operational": 4, 00:15:44.319 "process": { 00:15:44.319 "type": "rebuild", 00:15:44.319 "target": "spare", 00:15:44.319 "progress": 
{ 00:15:44.319 "blocks": 65280, 00:15:44.319 "percent": 34 00:15:44.319 } 00:15:44.319 }, 00:15:44.319 "base_bdevs_list": [ 00:15:44.319 { 00:15:44.319 "name": "spare", 00:15:44.319 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:44.319 "is_configured": true, 00:15:44.319 "data_offset": 2048, 00:15:44.319 "data_size": 63488 00:15:44.319 }, 00:15:44.319 { 00:15:44.319 "name": "BaseBdev2", 00:15:44.319 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:44.319 "is_configured": true, 00:15:44.319 "data_offset": 2048, 00:15:44.319 "data_size": 63488 00:15:44.319 }, 00:15:44.319 { 00:15:44.319 "name": "BaseBdev3", 00:15:44.319 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:44.319 "is_configured": true, 00:15:44.319 "data_offset": 2048, 00:15:44.319 "data_size": 63488 00:15:44.319 }, 00:15:44.319 { 00:15:44.319 "name": "BaseBdev4", 00:15:44.319 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:44.319 "is_configured": true, 00:15:44.319 "data_offset": 2048, 00:15:44.319 "data_size": 63488 00:15:44.319 } 00:15:44.319 ] 00:15:44.319 }' 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.319 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.578 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.578 18:56:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.515 
18:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.515 18:56:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.515 18:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.515 "name": "raid_bdev1", 00:15:45.515 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:45.515 "strip_size_kb": 64, 00:15:45.515 "state": "online", 00:15:45.515 "raid_level": "raid5f", 00:15:45.515 "superblock": true, 00:15:45.515 "num_base_bdevs": 4, 00:15:45.515 "num_base_bdevs_discovered": 4, 00:15:45.515 "num_base_bdevs_operational": 4, 00:15:45.515 "process": { 00:15:45.515 "type": "rebuild", 00:15:45.515 "target": "spare", 00:15:45.515 "progress": { 00:15:45.515 "blocks": 86400, 00:15:45.515 "percent": 45 00:15:45.515 } 00:15:45.515 }, 00:15:45.515 "base_bdevs_list": [ 00:15:45.515 { 00:15:45.515 "name": "spare", 00:15:45.515 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:45.515 "is_configured": true, 00:15:45.515 "data_offset": 2048, 00:15:45.515 "data_size": 63488 00:15:45.515 }, 00:15:45.515 { 00:15:45.515 "name": "BaseBdev2", 00:15:45.515 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:45.515 "is_configured": true, 00:15:45.515 "data_offset": 2048, 00:15:45.515 "data_size": 
63488 00:15:45.515 }, 00:15:45.515 { 00:15:45.516 "name": "BaseBdev3", 00:15:45.516 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:45.516 "is_configured": true, 00:15:45.516 "data_offset": 2048, 00:15:45.516 "data_size": 63488 00:15:45.516 }, 00:15:45.516 { 00:15:45.516 "name": "BaseBdev4", 00:15:45.516 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:45.516 "is_configured": true, 00:15:45.516 "data_offset": 2048, 00:15:45.516 "data_size": 63488 00:15:45.516 } 00:15:45.516 ] 00:15:45.516 }' 00:15:45.516 18:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.516 18:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.516 18:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.516 18:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.516 18:56:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.896 "name": "raid_bdev1", 00:15:46.896 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:46.896 "strip_size_kb": 64, 00:15:46.896 "state": "online", 00:15:46.896 "raid_level": "raid5f", 00:15:46.896 "superblock": true, 00:15:46.896 "num_base_bdevs": 4, 00:15:46.896 "num_base_bdevs_discovered": 4, 00:15:46.896 "num_base_bdevs_operational": 4, 00:15:46.896 "process": { 00:15:46.896 "type": "rebuild", 00:15:46.896 "target": "spare", 00:15:46.896 "progress": { 00:15:46.896 "blocks": 109440, 00:15:46.896 "percent": 57 00:15:46.896 } 00:15:46.896 }, 00:15:46.896 "base_bdevs_list": [ 00:15:46.896 { 00:15:46.896 "name": "spare", 00:15:46.896 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:46.896 "is_configured": true, 00:15:46.896 "data_offset": 2048, 00:15:46.896 "data_size": 63488 00:15:46.896 }, 00:15:46.896 { 00:15:46.896 "name": "BaseBdev2", 00:15:46.896 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:46.896 "is_configured": true, 00:15:46.896 "data_offset": 2048, 00:15:46.896 "data_size": 63488 00:15:46.896 }, 00:15:46.896 { 00:15:46.896 "name": "BaseBdev3", 00:15:46.896 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:46.896 "is_configured": true, 00:15:46.896 "data_offset": 2048, 00:15:46.896 "data_size": 63488 00:15:46.896 }, 00:15:46.896 { 00:15:46.896 "name": "BaseBdev4", 00:15:46.896 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:46.896 "is_configured": true, 00:15:46.896 "data_offset": 2048, 00:15:46.896 "data_size": 63488 00:15:46.896 } 00:15:46.896 ] 00:15:46.896 }' 00:15:46.896 18:56:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.896 18:56:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.837 "name": "raid_bdev1", 00:15:47.837 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:47.837 
"strip_size_kb": 64, 00:15:47.837 "state": "online", 00:15:47.837 "raid_level": "raid5f", 00:15:47.837 "superblock": true, 00:15:47.837 "num_base_bdevs": 4, 00:15:47.837 "num_base_bdevs_discovered": 4, 00:15:47.837 "num_base_bdevs_operational": 4, 00:15:47.837 "process": { 00:15:47.837 "type": "rebuild", 00:15:47.837 "target": "spare", 00:15:47.837 "progress": { 00:15:47.837 "blocks": 130560, 00:15:47.837 "percent": 68 00:15:47.837 } 00:15:47.837 }, 00:15:47.837 "base_bdevs_list": [ 00:15:47.837 { 00:15:47.837 "name": "spare", 00:15:47.837 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:47.837 "is_configured": true, 00:15:47.837 "data_offset": 2048, 00:15:47.837 "data_size": 63488 00:15:47.837 }, 00:15:47.837 { 00:15:47.837 "name": "BaseBdev2", 00:15:47.837 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:47.837 "is_configured": true, 00:15:47.837 "data_offset": 2048, 00:15:47.837 "data_size": 63488 00:15:47.837 }, 00:15:47.837 { 00:15:47.837 "name": "BaseBdev3", 00:15:47.837 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:47.837 "is_configured": true, 00:15:47.837 "data_offset": 2048, 00:15:47.837 "data_size": 63488 00:15:47.837 }, 00:15:47.837 { 00:15:47.837 "name": "BaseBdev4", 00:15:47.837 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:47.837 "is_configured": true, 00:15:47.837 "data_offset": 2048, 00:15:47.837 "data_size": 63488 00:15:47.837 } 00:15:47.837 ] 00:15:47.837 }' 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.837 18:56:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.219 
18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.219 "name": "raid_bdev1", 00:15:49.219 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:49.219 "strip_size_kb": 64, 00:15:49.219 "state": "online", 00:15:49.219 "raid_level": "raid5f", 00:15:49.219 "superblock": true, 00:15:49.219 "num_base_bdevs": 4, 00:15:49.219 "num_base_bdevs_discovered": 4, 00:15:49.219 "num_base_bdevs_operational": 4, 00:15:49.219 "process": { 00:15:49.219 "type": "rebuild", 00:15:49.219 "target": "spare", 00:15:49.219 "progress": { 00:15:49.219 "blocks": 153600, 00:15:49.219 "percent": 80 00:15:49.219 } 00:15:49.219 }, 00:15:49.219 "base_bdevs_list": [ 00:15:49.219 { 00:15:49.219 "name": "spare", 00:15:49.219 "uuid": 
"bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:49.219 "is_configured": true, 00:15:49.219 "data_offset": 2048, 00:15:49.219 "data_size": 63488 00:15:49.219 }, 00:15:49.219 { 00:15:49.219 "name": "BaseBdev2", 00:15:49.219 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:49.219 "is_configured": true, 00:15:49.219 "data_offset": 2048, 00:15:49.219 "data_size": 63488 00:15:49.219 }, 00:15:49.219 { 00:15:49.219 "name": "BaseBdev3", 00:15:49.219 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:49.219 "is_configured": true, 00:15:49.219 "data_offset": 2048, 00:15:49.219 "data_size": 63488 00:15:49.219 }, 00:15:49.219 { 00:15:49.219 "name": "BaseBdev4", 00:15:49.219 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:49.219 "is_configured": true, 00:15:49.219 "data_offset": 2048, 00:15:49.219 "data_size": 63488 00:15:49.219 } 00:15:49.219 ] 00:15:49.219 }' 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.219 18:56:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.159 "name": "raid_bdev1", 00:15:50.159 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:50.159 "strip_size_kb": 64, 00:15:50.159 "state": "online", 00:15:50.159 "raid_level": "raid5f", 00:15:50.159 "superblock": true, 00:15:50.159 "num_base_bdevs": 4, 00:15:50.159 "num_base_bdevs_discovered": 4, 00:15:50.159 "num_base_bdevs_operational": 4, 00:15:50.159 "process": { 00:15:50.159 "type": "rebuild", 00:15:50.159 "target": "spare", 00:15:50.159 "progress": { 00:15:50.159 "blocks": 174720, 00:15:50.159 "percent": 91 00:15:50.159 } 00:15:50.159 }, 00:15:50.159 "base_bdevs_list": [ 00:15:50.159 { 00:15:50.159 "name": "spare", 00:15:50.159 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:50.159 "is_configured": true, 00:15:50.159 "data_offset": 2048, 00:15:50.159 "data_size": 63488 00:15:50.159 }, 00:15:50.159 { 00:15:50.159 "name": "BaseBdev2", 00:15:50.159 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:50.159 "is_configured": true, 00:15:50.159 "data_offset": 2048, 00:15:50.159 "data_size": 63488 00:15:50.159 }, 00:15:50.159 { 00:15:50.159 "name": "BaseBdev3", 00:15:50.159 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:50.159 "is_configured": true, 00:15:50.159 
"data_offset": 2048, 00:15:50.159 "data_size": 63488 00:15:50.159 }, 00:15:50.159 { 00:15:50.159 "name": "BaseBdev4", 00:15:50.159 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:50.159 "is_configured": true, 00:15:50.159 "data_offset": 2048, 00:15:50.159 "data_size": 63488 00:15:50.159 } 00:15:50.159 ] 00:15:50.159 }' 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.159 18:56:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.099 [2024-11-28 18:56:20.414475] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:51.099 [2024-11-28 18:56:20.414535] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:51.099 [2024-11-28 18:56:20.414641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.358 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.358 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.358 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.358 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.358 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.358 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.358 18:56:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.358 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.358 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.358 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.359 "name": "raid_bdev1", 00:15:51.359 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:51.359 "strip_size_kb": 64, 00:15:51.359 "state": "online", 00:15:51.359 "raid_level": "raid5f", 00:15:51.359 "superblock": true, 00:15:51.359 "num_base_bdevs": 4, 00:15:51.359 "num_base_bdevs_discovered": 4, 00:15:51.359 "num_base_bdevs_operational": 4, 00:15:51.359 "base_bdevs_list": [ 00:15:51.359 { 00:15:51.359 "name": "spare", 00:15:51.359 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:51.359 "is_configured": true, 00:15:51.359 "data_offset": 2048, 00:15:51.359 "data_size": 63488 00:15:51.359 }, 00:15:51.359 { 00:15:51.359 "name": "BaseBdev2", 00:15:51.359 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:51.359 "is_configured": true, 00:15:51.359 "data_offset": 2048, 00:15:51.359 "data_size": 63488 00:15:51.359 }, 00:15:51.359 { 00:15:51.359 "name": "BaseBdev3", 00:15:51.359 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:51.359 "is_configured": true, 00:15:51.359 "data_offset": 2048, 00:15:51.359 "data_size": 63488 00:15:51.359 }, 00:15:51.359 { 00:15:51.359 "name": "BaseBdev4", 00:15:51.359 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:51.359 "is_configured": true, 00:15:51.359 "data_offset": 2048, 00:15:51.359 "data_size": 63488 00:15:51.359 } 00:15:51.359 ] 00:15:51.359 }' 00:15:51.359 18:56:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.359 "name": "raid_bdev1", 00:15:51.359 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:51.359 "strip_size_kb": 64, 00:15:51.359 "state": "online", 00:15:51.359 "raid_level": "raid5f", 00:15:51.359 "superblock": true, 
00:15:51.359 "num_base_bdevs": 4, 00:15:51.359 "num_base_bdevs_discovered": 4, 00:15:51.359 "num_base_bdevs_operational": 4, 00:15:51.359 "base_bdevs_list": [ 00:15:51.359 { 00:15:51.359 "name": "spare", 00:15:51.359 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:51.359 "is_configured": true, 00:15:51.359 "data_offset": 2048, 00:15:51.359 "data_size": 63488 00:15:51.359 }, 00:15:51.359 { 00:15:51.359 "name": "BaseBdev2", 00:15:51.359 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:51.359 "is_configured": true, 00:15:51.359 "data_offset": 2048, 00:15:51.359 "data_size": 63488 00:15:51.359 }, 00:15:51.359 { 00:15:51.359 "name": "BaseBdev3", 00:15:51.359 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:51.359 "is_configured": true, 00:15:51.359 "data_offset": 2048, 00:15:51.359 "data_size": 63488 00:15:51.359 }, 00:15:51.359 { 00:15:51.359 "name": "BaseBdev4", 00:15:51.359 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:51.359 "is_configured": true, 00:15:51.359 "data_offset": 2048, 00:15:51.359 "data_size": 63488 00:15:51.359 } 00:15:51.359 ] 00:15:51.359 }' 00:15:51.359 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.618 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.618 18:56:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.618 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.618 "name": "raid_bdev1", 00:15:51.618 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:51.618 "strip_size_kb": 64, 00:15:51.618 "state": "online", 00:15:51.618 "raid_level": "raid5f", 00:15:51.618 "superblock": true, 00:15:51.618 "num_base_bdevs": 4, 00:15:51.618 "num_base_bdevs_discovered": 4, 00:15:51.618 "num_base_bdevs_operational": 4, 00:15:51.618 "base_bdevs_list": [ 00:15:51.618 { 00:15:51.618 "name": "spare", 00:15:51.619 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:51.619 "is_configured": true, 00:15:51.619 "data_offset": 2048, 00:15:51.619 "data_size": 63488 00:15:51.619 }, 00:15:51.619 { 00:15:51.619 "name": 
"BaseBdev2", 00:15:51.619 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:51.619 "is_configured": true, 00:15:51.619 "data_offset": 2048, 00:15:51.619 "data_size": 63488 00:15:51.619 }, 00:15:51.619 { 00:15:51.619 "name": "BaseBdev3", 00:15:51.619 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:51.619 "is_configured": true, 00:15:51.619 "data_offset": 2048, 00:15:51.619 "data_size": 63488 00:15:51.619 }, 00:15:51.619 { 00:15:51.619 "name": "BaseBdev4", 00:15:51.619 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:51.619 "is_configured": true, 00:15:51.619 "data_offset": 2048, 00:15:51.619 "data_size": 63488 00:15:51.619 } 00:15:51.619 ] 00:15:51.619 }' 00:15:51.619 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.619 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.878 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.878 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.878 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.878 [2024-11-28 18:56:21.456843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.878 [2024-11-28 18:56:21.456874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.878 [2024-11-28 18:56:21.456952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.878 [2024-11-28 18:56:21.457044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.878 [2024-11-28 18:56:21.457053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:51.878 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.878 
18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.878 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:51.878 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.878 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.878 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:52.137 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev1 /dev/nbd0 00:15:52.137 /dev/nbd0 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.397 1+0 records in 00:15:52.397 1+0 records out 00:15:52.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353444 s, 11.6 MB/s 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@893 -- # return 0 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:52.397 18:56:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:52.397 /dev/nbd1 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.657 1+0 records in 00:15:52.657 1+0 records out 00:15:52.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426303 s, 9.6 MB/s 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.657 
18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.657 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.917 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:53.177 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:53.177 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:53.177 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:53.177 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.178 18:56:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.178 [2024-11-28 18:56:22.561776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:53.178 [2024-11-28 18:56:22.561838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.178 [2024-11-28 18:56:22.561862] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:53.178 [2024-11-28 18:56:22.561870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.178 [2024-11-28 18:56:22.564185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.178 [2024-11-28 18:56:22.564226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:53.178 [2024-11-28 18:56:22.564305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:53.178 [2024-11-28 18:56:22.564348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.178 [2024-11-28 18:56:22.564469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.178 [2024-11-28 18:56:22.564554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:53.178 [2024-11-28 18:56:22.564617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:53.178 spare 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.178 [2024-11-28 18:56:22.664698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:53.178 [2024-11-28 18:56:22.664737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:53.178 [2024-11-28 18:56:22.665024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000495e0 00:15:53.178 [2024-11-28 18:56:22.665511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:53.178 [2024-11-28 18:56:22.665529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:53.178 [2024-11-28 18:56:22.665663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.178 "name": "raid_bdev1", 00:15:53.178 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:53.178 "strip_size_kb": 64, 00:15:53.178 "state": "online", 00:15:53.178 "raid_level": "raid5f", 00:15:53.178 "superblock": true, 00:15:53.178 "num_base_bdevs": 4, 00:15:53.178 "num_base_bdevs_discovered": 4, 00:15:53.178 "num_base_bdevs_operational": 4, 00:15:53.178 "base_bdevs_list": [ 00:15:53.178 { 00:15:53.178 "name": "spare", 00:15:53.178 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:53.178 "is_configured": true, 00:15:53.178 "data_offset": 2048, 00:15:53.178 "data_size": 63488 00:15:53.178 }, 00:15:53.178 { 00:15:53.178 "name": "BaseBdev2", 00:15:53.178 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:53.178 "is_configured": true, 00:15:53.178 "data_offset": 2048, 00:15:53.178 "data_size": 63488 00:15:53.178 }, 00:15:53.178 { 00:15:53.178 "name": "BaseBdev3", 00:15:53.178 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:53.178 "is_configured": true, 00:15:53.178 "data_offset": 2048, 00:15:53.178 "data_size": 63488 00:15:53.178 }, 
00:15:53.178 { 00:15:53.178 "name": "BaseBdev4", 00:15:53.178 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:53.178 "is_configured": true, 00:15:53.178 "data_offset": 2048, 00:15:53.178 "data_size": 63488 00:15:53.178 } 00:15:53.178 ] 00:15:53.178 }' 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.178 18:56:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.749 "name": "raid_bdev1", 00:15:53.749 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:53.749 "strip_size_kb": 64, 00:15:53.749 "state": "online", 00:15:53.749 "raid_level": "raid5f", 00:15:53.749 "superblock": true, 00:15:53.749 "num_base_bdevs": 4, 00:15:53.749 "num_base_bdevs_discovered": 4, 
00:15:53.749 "num_base_bdevs_operational": 4, 00:15:53.749 "base_bdevs_list": [ 00:15:53.749 { 00:15:53.749 "name": "spare", 00:15:53.749 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:53.749 "is_configured": true, 00:15:53.749 "data_offset": 2048, 00:15:53.749 "data_size": 63488 00:15:53.749 }, 00:15:53.749 { 00:15:53.749 "name": "BaseBdev2", 00:15:53.749 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:53.749 "is_configured": true, 00:15:53.749 "data_offset": 2048, 00:15:53.749 "data_size": 63488 00:15:53.749 }, 00:15:53.749 { 00:15:53.749 "name": "BaseBdev3", 00:15:53.749 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:53.749 "is_configured": true, 00:15:53.749 "data_offset": 2048, 00:15:53.749 "data_size": 63488 00:15:53.749 }, 00:15:53.749 { 00:15:53.749 "name": "BaseBdev4", 00:15:53.749 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:53.749 "is_configured": true, 00:15:53.749 "data_offset": 2048, 00:15:53.749 "data_size": 63488 00:15:53.749 } 00:15:53.749 ] 00:15:53.749 }' 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.749 [2024-11-28 18:56:23.282012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.749 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.750 "name": "raid_bdev1", 00:15:53.750 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:53.750 "strip_size_kb": 64, 00:15:53.750 "state": "online", 00:15:53.750 "raid_level": "raid5f", 00:15:53.750 "superblock": true, 00:15:53.750 "num_base_bdevs": 4, 00:15:53.750 "num_base_bdevs_discovered": 3, 00:15:53.750 "num_base_bdevs_operational": 3, 00:15:53.750 "base_bdevs_list": [ 00:15:53.750 { 00:15:53.750 "name": null, 00:15:53.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.750 "is_configured": false, 00:15:53.750 "data_offset": 0, 00:15:53.750 "data_size": 63488 00:15:53.750 }, 00:15:53.750 { 00:15:53.750 "name": "BaseBdev2", 00:15:53.750 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:53.750 "is_configured": true, 00:15:53.750 "data_offset": 2048, 00:15:53.750 "data_size": 63488 00:15:53.750 }, 00:15:53.750 { 00:15:53.750 "name": "BaseBdev3", 00:15:53.750 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:53.750 "is_configured": true, 00:15:53.750 "data_offset": 2048, 00:15:53.750 "data_size": 63488 00:15:53.750 }, 00:15:53.750 { 00:15:53.750 "name": "BaseBdev4", 00:15:53.750 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:53.750 "is_configured": true, 00:15:53.750 "data_offset": 2048, 00:15:53.750 "data_size": 63488 00:15:53.750 } 00:15:53.750 ] 00:15:53.750 }' 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.750 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.319 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:54.319 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.319 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.319 [2024-11-28 18:56:23.662144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.319 [2024-11-28 18:56:23.662376] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:54.319 [2024-11-28 18:56:23.662455] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:54.319 [2024-11-28 18:56:23.662541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.319 [2024-11-28 18:56:23.666611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000496b0 00:15:54.319 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.319 18:56:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:54.319 [2024-11-28 18:56:23.668803] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:55.263 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.263 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.263 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.263 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.263 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.263 
18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.264 "name": "raid_bdev1", 00:15:55.264 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:55.264 "strip_size_kb": 64, 00:15:55.264 "state": "online", 00:15:55.264 "raid_level": "raid5f", 00:15:55.264 "superblock": true, 00:15:55.264 "num_base_bdevs": 4, 00:15:55.264 "num_base_bdevs_discovered": 4, 00:15:55.264 "num_base_bdevs_operational": 4, 00:15:55.264 "process": { 00:15:55.264 "type": "rebuild", 00:15:55.264 "target": "spare", 00:15:55.264 "progress": { 00:15:55.264 "blocks": 19200, 00:15:55.264 "percent": 10 00:15:55.264 } 00:15:55.264 }, 00:15:55.264 "base_bdevs_list": [ 00:15:55.264 { 00:15:55.264 "name": "spare", 00:15:55.264 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:55.264 "is_configured": true, 00:15:55.264 "data_offset": 2048, 00:15:55.264 "data_size": 63488 00:15:55.264 }, 00:15:55.264 { 00:15:55.264 "name": "BaseBdev2", 00:15:55.264 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:55.264 "is_configured": true, 00:15:55.264 "data_offset": 2048, 00:15:55.264 "data_size": 63488 00:15:55.264 }, 00:15:55.264 { 00:15:55.264 "name": "BaseBdev3", 00:15:55.264 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:55.264 "is_configured": true, 00:15:55.264 "data_offset": 2048, 00:15:55.264 "data_size": 63488 00:15:55.264 }, 00:15:55.264 { 00:15:55.264 "name": "BaseBdev4", 00:15:55.264 "uuid": 
"97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:55.264 "is_configured": true, 00:15:55.264 "data_offset": 2048, 00:15:55.264 "data_size": 63488 00:15:55.264 } 00:15:55.264 ] 00:15:55.264 }' 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.264 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.264 [2024-11-28 18:56:24.807544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.524 [2024-11-28 18:56:24.876168] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:55.524 [2024-11-28 18:56:24.876229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.524 [2024-11-28 18:56:24.876245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.524 [2024-11-28 18:56:24.876253] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.524 18:56:24 
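The trace above filters the `bdev_raid_get_bdevs` output through `jq` and compares the result with a bash pattern match (`[[ rebuild == \r\e\b\u\i\l\d ]]`). A minimal standalone sketch of the same check, using a hypothetical inline JSON snippet in place of the live RPC response:

```shell
#!/usr/bin/env bash
# Hypothetical RPC response; in the real test this comes from
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
raid_bdev_info='{
  "name": "raid_bdev1",
  "process": { "type": "rebuild", "target": "spare" }
}'

# The '// "none"' alternative operator yields "none" when no process is running.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")

[[ $process_type == "rebuild" ]] || exit 1
[[ $process_target == "spare" ]] || exit 1
echo "rebuild targeting $process_target in progress"
```

The `// "none"` fallback is what lets the same check later assert that no process is running, once the rebuild has finished or been aborted.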
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.524 "name": "raid_bdev1", 00:15:55.524 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:55.524 "strip_size_kb": 64, 00:15:55.524 "state": "online", 00:15:55.524 "raid_level": "raid5f", 00:15:55.524 "superblock": true, 00:15:55.524 "num_base_bdevs": 4, 00:15:55.524 "num_base_bdevs_discovered": 3, 00:15:55.524 "num_base_bdevs_operational": 3, 00:15:55.524 "base_bdevs_list": [ 00:15:55.524 { 00:15:55.524 "name": null, 00:15:55.524 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:55.524 "is_configured": false, 00:15:55.524 "data_offset": 0, 00:15:55.524 "data_size": 63488 00:15:55.524 }, 00:15:55.524 { 00:15:55.524 "name": "BaseBdev2", 00:15:55.524 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:55.524 "is_configured": true, 00:15:55.524 "data_offset": 2048, 00:15:55.524 "data_size": 63488 00:15:55.524 }, 00:15:55.524 { 00:15:55.524 "name": "BaseBdev3", 00:15:55.524 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:55.524 "is_configured": true, 00:15:55.524 "data_offset": 2048, 00:15:55.524 "data_size": 63488 00:15:55.524 }, 00:15:55.524 { 00:15:55.524 "name": "BaseBdev4", 00:15:55.524 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:55.524 "is_configured": true, 00:15:55.524 "data_offset": 2048, 00:15:55.524 "data_size": 63488 00:15:55.524 } 00:15:55.524 ] 00:15:55.524 }' 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.524 18:56:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.784 18:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.784 18:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.784 18:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.784 [2024-11-28 18:56:25.325568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.784 [2024-11-28 18:56:25.325671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.784 [2024-11-28 18:56:25.325712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:55.784 [2024-11-28 18:56:25.325741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.784 [2024-11-28 18:56:25.326180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:15:55.784 [2024-11-28 18:56:25.326245] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:55.784 [2024-11-28 18:56:25.326351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:55.784 [2024-11-28 18:56:25.326397] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:55.784 [2024-11-28 18:56:25.326485] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:55.784 [2024-11-28 18:56:25.326540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.784 [2024-11-28 18:56:25.329963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049780 00:15:55.784 spare 00:15:55.784 18:56:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.784 18:56:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:55.784 [2024-11-28 18:56:25.332144] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:57.166 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.166 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.166 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.166 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.166 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.166 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.166 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:57.166 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.166 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.167 "name": "raid_bdev1", 00:15:57.167 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:57.167 "strip_size_kb": 64, 00:15:57.167 "state": "online", 00:15:57.167 "raid_level": "raid5f", 00:15:57.167 "superblock": true, 00:15:57.167 "num_base_bdevs": 4, 00:15:57.167 "num_base_bdevs_discovered": 4, 00:15:57.167 "num_base_bdevs_operational": 4, 00:15:57.167 "process": { 00:15:57.167 "type": "rebuild", 00:15:57.167 "target": "spare", 00:15:57.167 "progress": { 00:15:57.167 "blocks": 19200, 00:15:57.167 "percent": 10 00:15:57.167 } 00:15:57.167 }, 00:15:57.167 "base_bdevs_list": [ 00:15:57.167 { 00:15:57.167 "name": "spare", 00:15:57.167 "uuid": "bb554cd0-3454-55e7-9b2d-f12998ec172f", 00:15:57.167 "is_configured": true, 00:15:57.167 "data_offset": 2048, 00:15:57.167 "data_size": 63488 00:15:57.167 }, 00:15:57.167 { 00:15:57.167 "name": "BaseBdev2", 00:15:57.167 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:57.167 "is_configured": true, 00:15:57.167 "data_offset": 2048, 00:15:57.167 "data_size": 63488 00:15:57.167 }, 00:15:57.167 { 00:15:57.167 "name": "BaseBdev3", 00:15:57.167 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:57.167 "is_configured": true, 00:15:57.167 "data_offset": 2048, 00:15:57.167 "data_size": 63488 00:15:57.167 }, 00:15:57.167 { 00:15:57.167 "name": "BaseBdev4", 00:15:57.167 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:57.167 "is_configured": true, 00:15:57.167 "data_offset": 2048, 00:15:57.167 "data_size": 63488 00:15:57.167 } 00:15:57.167 ] 00:15:57.167 }' 00:15:57.167 18:56:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.167 [2024-11-28 18:56:26.490924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.167 [2024-11-28 18:56:26.539507] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:57.167 [2024-11-28 18:56:26.539554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.167 [2024-11-28 18:56:26.539572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.167 [2024-11-28 18:56:26.539578] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.167 
18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.167 "name": "raid_bdev1", 00:15:57.167 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:57.167 "strip_size_kb": 64, 00:15:57.167 "state": "online", 00:15:57.167 "raid_level": "raid5f", 00:15:57.167 "superblock": true, 00:15:57.167 "num_base_bdevs": 4, 00:15:57.167 "num_base_bdevs_discovered": 3, 00:15:57.167 "num_base_bdevs_operational": 3, 00:15:57.167 "base_bdevs_list": [ 00:15:57.167 { 00:15:57.167 "name": null, 00:15:57.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.167 "is_configured": false, 00:15:57.167 "data_offset": 0, 00:15:57.167 "data_size": 63488 00:15:57.167 }, 00:15:57.167 { 00:15:57.167 "name": "BaseBdev2", 00:15:57.167 "uuid": 
"2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:57.167 "is_configured": true, 00:15:57.167 "data_offset": 2048, 00:15:57.167 "data_size": 63488 00:15:57.167 }, 00:15:57.167 { 00:15:57.167 "name": "BaseBdev3", 00:15:57.167 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:57.167 "is_configured": true, 00:15:57.167 "data_offset": 2048, 00:15:57.167 "data_size": 63488 00:15:57.167 }, 00:15:57.167 { 00:15:57.167 "name": "BaseBdev4", 00:15:57.167 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:57.167 "is_configured": true, 00:15:57.167 "data_offset": 2048, 00:15:57.167 "data_size": 63488 00:15:57.167 } 00:15:57.167 ] 00:15:57.167 }' 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.167 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.427 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.427 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.427 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.427 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.427 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.427 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.427 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.427 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.427 18:56:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.427 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.427 18:56:27 
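The `verify_raid_bdev_state` calls in this trace capture the bdev's JSON into `raid_bdev_info` and then assert individual fields against the expected values (`online raid5f 64 3`). A simplified sketch of that pattern, with a hypothetical inline JSON standing in for the captured RPC output:

```shell
#!/usr/bin/env bash
# Hypothetical captured RPC output, mirroring the raid_bdev_info seen above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

# verify_raid_bdev_state-style assertions: compare each field to the expectation.
[[ $(jq -r '.state' <<< "$raid_bdev_info") == "online" ]] || exit 1
[[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "raid5f" ]] || exit 1
[[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == 64 ]] || exit 1
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") == 3 ]] || exit 1
echo "raid_bdev1 verified: online raid5f, 3 base bdevs discovered"
```

Here 3 of 4 base bdevs discovered is the expected post-deletion state: the `spare` passthru was removed, leaving a `null` placeholder entry in `base_bdevs_list`.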
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.427 "name": "raid_bdev1", 00:15:57.427 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:57.427 "strip_size_kb": 64, 00:15:57.427 "state": "online", 00:15:57.427 "raid_level": "raid5f", 00:15:57.427 "superblock": true, 00:15:57.427 "num_base_bdevs": 4, 00:15:57.427 "num_base_bdevs_discovered": 3, 00:15:57.427 "num_base_bdevs_operational": 3, 00:15:57.427 "base_bdevs_list": [ 00:15:57.427 { 00:15:57.427 "name": null, 00:15:57.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.427 "is_configured": false, 00:15:57.427 "data_offset": 0, 00:15:57.427 "data_size": 63488 00:15:57.427 }, 00:15:57.427 { 00:15:57.427 "name": "BaseBdev2", 00:15:57.427 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:57.427 "is_configured": true, 00:15:57.427 "data_offset": 2048, 00:15:57.427 "data_size": 63488 00:15:57.427 }, 00:15:57.427 { 00:15:57.427 "name": "BaseBdev3", 00:15:57.427 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:57.427 "is_configured": true, 00:15:57.427 "data_offset": 2048, 00:15:57.427 "data_size": 63488 00:15:57.427 }, 00:15:57.427 { 00:15:57.427 "name": "BaseBdev4", 00:15:57.427 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:57.427 "is_configured": true, 00:15:57.427 "data_offset": 2048, 00:15:57.427 "data_size": 63488 00:15:57.427 } 00:15:57.427 ] 00:15:57.427 }' 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:57.688 
18:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.688 [2024-11-28 18:56:27.124880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:57.688 [2024-11-28 18:56:27.124931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.688 [2024-11-28 18:56:27.124951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:57.688 [2024-11-28 18:56:27.124960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.688 [2024-11-28 18:56:27.125367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.688 [2024-11-28 18:56:27.125383] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:57.688 [2024-11-28 18:56:27.125468] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:57.688 [2024-11-28 18:56:27.125481] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:57.688 [2024-11-28 18:56:27.125493] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:57.688 [2024-11-28 18:56:27.125502] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 
00:15:57.688 BaseBdev1 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.688 18:56:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:58.628 "name": "raid_bdev1", 00:15:58.628 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:58.628 "strip_size_kb": 64, 00:15:58.628 "state": "online", 00:15:58.628 "raid_level": "raid5f", 00:15:58.628 "superblock": true, 00:15:58.628 "num_base_bdevs": 4, 00:15:58.628 "num_base_bdevs_discovered": 3, 00:15:58.628 "num_base_bdevs_operational": 3, 00:15:58.628 "base_bdevs_list": [ 00:15:58.628 { 00:15:58.628 "name": null, 00:15:58.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.628 "is_configured": false, 00:15:58.628 "data_offset": 0, 00:15:58.628 "data_size": 63488 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "name": "BaseBdev2", 00:15:58.628 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:58.628 "is_configured": true, 00:15:58.628 "data_offset": 2048, 00:15:58.628 "data_size": 63488 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "name": "BaseBdev3", 00:15:58.628 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:58.628 "is_configured": true, 00:15:58.628 "data_offset": 2048, 00:15:58.628 "data_size": 63488 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "name": "BaseBdev4", 00:15:58.628 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:58.628 "is_configured": true, 00:15:58.628 "data_offset": 2048, 00:15:58.628 "data_size": 63488 00:15:58.628 } 00:15:58.628 ] 00:15:58.628 }' 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.628 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.198 "name": "raid_bdev1", 00:15:59.198 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:15:59.198 "strip_size_kb": 64, 00:15:59.198 "state": "online", 00:15:59.198 "raid_level": "raid5f", 00:15:59.198 "superblock": true, 00:15:59.198 "num_base_bdevs": 4, 00:15:59.198 "num_base_bdevs_discovered": 3, 00:15:59.198 "num_base_bdevs_operational": 3, 00:15:59.198 "base_bdevs_list": [ 00:15:59.198 { 00:15:59.198 "name": null, 00:15:59.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.198 "is_configured": false, 00:15:59.198 "data_offset": 0, 00:15:59.198 "data_size": 63488 00:15:59.198 }, 00:15:59.198 { 00:15:59.198 "name": "BaseBdev2", 00:15:59.198 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:15:59.198 "is_configured": true, 00:15:59.198 "data_offset": 2048, 00:15:59.198 "data_size": 63488 00:15:59.198 }, 00:15:59.198 { 00:15:59.198 "name": "BaseBdev3", 00:15:59.198 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:15:59.198 "is_configured": true, 00:15:59.198 "data_offset": 2048, 00:15:59.198 "data_size": 63488 00:15:59.198 }, 00:15:59.198 { 00:15:59.198 "name": "BaseBdev4", 00:15:59.198 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:15:59.198 "is_configured": true, 
00:15:59.198 "data_offset": 2048, 00:15:59.198 "data_size": 63488 00:15:59.198 } 00:15:59.198 ] 00:15:59.198 }' 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.198 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.198 [2024-11-28 18:56:28.677307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.198 [2024-11-28 18:56:28.677448] bdev_raid.c:3700:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:59.198 [2024-11-28 18:56:28.677466] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:59.198 request: 00:15:59.198 { 00:15:59.198 "base_bdev": "BaseBdev1", 00:15:59.198 "raid_bdev": "raid_bdev1", 00:15:59.198 "method": "bdev_raid_add_base_bdev", 00:15:59.199 "req_id": 1 00:15:59.199 } 00:15:59.199 Got JSON-RPC error response 00:15:59.199 response: 00:15:59.199 { 00:15:59.199 "code": -22, 00:15:59.199 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:59.199 } 00:15:59.199 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:59.199 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:59.199 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:59.199 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:59.199 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.199 18:56:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.138 18:56:29 
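The `NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` call above is an expected-failure check: the RPC is rejected with code -22 (Invalid argument), and the `NOT` helper from autotest_common.sh inverts the exit status so that the failure counts as a pass (`es=1`, `(( !es == 0 ))`). A simplified sketch of that inversion idiom; the `failing_rpc` function is a stand-in for the real RPC, not part of the test suite:

```shell
#!/usr/bin/env bash
# Minimal re-creation of the expected-failure idiom seen in the trace:
# run a command that is supposed to fail, and succeed only if it did fail.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert: zero exit (unexpected success) becomes failure, nonzero becomes success.
    (( es != 0 ))
}

# Stand-in for "rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1", which the
# test expects to be rejected with -22 because the superblock does not contain
# this bdev's uuid.
failing_rpc() {
    echo '{"code": -22, "message": "Failed to add base bdev to RAID bdev: Invalid argument"}' >&2
    return 1
}

if NOT failing_rpc; then
    echo "RPC failed as expected"
fi
```

The real helper also distinguishes exit codes above 128 (signal deaths), which this sketch omits; only the inversion logic is reproduced here.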
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.138 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.397 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.397 "name": "raid_bdev1", 00:16:00.397 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:16:00.397 "strip_size_kb": 64, 00:16:00.397 "state": "online", 00:16:00.397 "raid_level": "raid5f", 00:16:00.397 "superblock": true, 00:16:00.397 "num_base_bdevs": 4, 00:16:00.397 "num_base_bdevs_discovered": 3, 00:16:00.397 "num_base_bdevs_operational": 3, 00:16:00.397 "base_bdevs_list": [ 00:16:00.397 { 00:16:00.397 "name": null, 00:16:00.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.397 "is_configured": false, 00:16:00.397 "data_offset": 0, 00:16:00.397 "data_size": 63488 00:16:00.397 }, 00:16:00.397 { 00:16:00.397 "name": "BaseBdev2", 00:16:00.397 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:16:00.397 "is_configured": true, 00:16:00.397 "data_offset": 2048, 00:16:00.397 "data_size": 63488 00:16:00.397 }, 00:16:00.397 { 00:16:00.397 "name": "BaseBdev3", 00:16:00.397 "uuid": 
"1025b741-66ff-501b-9942-026fc98a4f0a", 00:16:00.397 "is_configured": true, 00:16:00.397 "data_offset": 2048, 00:16:00.397 "data_size": 63488 00:16:00.397 }, 00:16:00.397 { 00:16:00.397 "name": "BaseBdev4", 00:16:00.397 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:16:00.397 "is_configured": true, 00:16:00.397 "data_offset": 2048, 00:16:00.397 "data_size": 63488 00:16:00.397 } 00:16:00.397 ] 00:16:00.397 }' 00:16:00.397 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.397 18:56:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.656 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.656 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.656 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.656 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.656 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.656 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.656 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.656 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.657 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.657 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.657 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.657 "name": "raid_bdev1", 00:16:00.657 "uuid": "6fe06c91-01db-45a0-8e06-20d701a5d700", 00:16:00.657 "strip_size_kb": 64, 00:16:00.657 "state": 
"online", 00:16:00.657 "raid_level": "raid5f", 00:16:00.657 "superblock": true, 00:16:00.657 "num_base_bdevs": 4, 00:16:00.657 "num_base_bdevs_discovered": 3, 00:16:00.657 "num_base_bdevs_operational": 3, 00:16:00.657 "base_bdevs_list": [ 00:16:00.657 { 00:16:00.657 "name": null, 00:16:00.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.657 "is_configured": false, 00:16:00.657 "data_offset": 0, 00:16:00.657 "data_size": 63488 00:16:00.657 }, 00:16:00.657 { 00:16:00.657 "name": "BaseBdev2", 00:16:00.657 "uuid": "2de4ccc8-357d-590b-9736-23c0b9856901", 00:16:00.657 "is_configured": true, 00:16:00.657 "data_offset": 2048, 00:16:00.657 "data_size": 63488 00:16:00.657 }, 00:16:00.657 { 00:16:00.657 "name": "BaseBdev3", 00:16:00.657 "uuid": "1025b741-66ff-501b-9942-026fc98a4f0a", 00:16:00.657 "is_configured": true, 00:16:00.657 "data_offset": 2048, 00:16:00.657 "data_size": 63488 00:16:00.657 }, 00:16:00.657 { 00:16:00.657 "name": "BaseBdev4", 00:16:00.657 "uuid": "97e40cd4-9c06-560a-9132-5fec64eff09a", 00:16:00.657 "is_configured": true, 00:16:00.657 "data_offset": 2048, 00:16:00.657 "data_size": 63488 00:16:00.657 } 00:16:00.657 ] 00:16:00.657 }' 00:16:00.657 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.657 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.657 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 97010 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 97010 ']' 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 97010 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@959 -- # uname 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97010 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.917 killing process with pid 97010 00:16:00.917 Received shutdown signal, test time was about 60.000000 seconds 00:16:00.917 00:16:00.917 Latency(us) 00:16:00.917 [2024-11-28T18:56:30.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.917 [2024-11-28T18:56:30.523Z] =================================================================================================================== 00:16:00.917 [2024-11-28T18:56:30.523Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97010' 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 97010 00:16:00.917 [2024-11-28 18:56:30.313376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.917 [2024-11-28 18:56:30.313487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.917 [2024-11-28 18:56:30.313554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.917 [2024-11-28 18:56:30.313566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:00.917 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 97010 00:16:00.917 [2024-11-28 18:56:30.363021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:16:01.178 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:01.178 00:16:01.178 real 0m25.023s 00:16:01.178 user 0m31.748s 00:16:01.178 sys 0m3.049s 00:16:01.178 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.178 18:56:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.178 ************************************ 00:16:01.178 END TEST raid5f_rebuild_test_sb 00:16:01.178 ************************************ 00:16:01.178 18:56:30 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:01.178 18:56:30 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:01.178 18:56:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:01.178 18:56:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.178 18:56:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.178 ************************************ 00:16:01.178 START TEST raid_state_function_test_sb_4k 00:16:01.178 ************************************ 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs 
)) 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=97808 00:16:01.178 18:56:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97808' 00:16:01.178 Process raid pid: 97808 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:01.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 97808 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 97808 ']' 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.178 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.179 18:56:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.179 [2024-11-28 18:56:30.757704] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:01.179 [2024-11-28 18:56:30.757939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.440 [2024-11-28 18:56:30.900090] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:01.440 [2024-11-28 18:56:30.937341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.440 [2024-11-28 18:56:30.965192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.440 [2024-11-28 18:56:31.010063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.440 [2024-11-28 18:56:31.010181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.010 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.010 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:02.010 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:02.010 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.010 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.010 [2024-11-28 18:56:31.586241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.010 [2024-11-28 18:56:31.586350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.010 [2024-11-28 18:56:31.586390] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.010 [2024-11-28 18:56:31.586411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.011 
18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.011 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.270 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.270 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.270 "name": "Existed_Raid", 00:16:02.270 "uuid": "6f00f1af-4a32-4ac8-a737-1a2434f3eec8", 00:16:02.270 "strip_size_kb": 0, 00:16:02.270 "state": "configuring", 00:16:02.270 "raid_level": "raid1", 00:16:02.270 "superblock": true, 00:16:02.270 "num_base_bdevs": 2, 00:16:02.270 "num_base_bdevs_discovered": 0, 00:16:02.270 "num_base_bdevs_operational": 2, 
00:16:02.270 "base_bdevs_list": [ 00:16:02.270 { 00:16:02.270 "name": "BaseBdev1", 00:16:02.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.270 "is_configured": false, 00:16:02.270 "data_offset": 0, 00:16:02.270 "data_size": 0 00:16:02.270 }, 00:16:02.270 { 00:16:02.270 "name": "BaseBdev2", 00:16:02.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.270 "is_configured": false, 00:16:02.270 "data_offset": 0, 00:16:02.270 "data_size": 0 00:16:02.270 } 00:16:02.270 ] 00:16:02.270 }' 00:16:02.270 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.271 18:56:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.532 [2024-11-28 18:56:32.018247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.532 [2024-11-28 18:56:32.018326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.532 [2024-11-28 18:56:32.026281] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:16:02.532 [2024-11-28 18:56:32.026350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.532 [2024-11-28 18:56:32.026378] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.532 [2024-11-28 18:56:32.026397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.532 [2024-11-28 18:56:32.043262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.532 BaseBdev1 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.532 18:56:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.532 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.532 [ 00:16:02.532 { 00:16:02.532 "name": "BaseBdev1", 00:16:02.532 "aliases": [ 00:16:02.532 "cdeedf22-0939-4e37-9e4f-33d2e074618a" 00:16:02.532 ], 00:16:02.532 "product_name": "Malloc disk", 00:16:02.532 "block_size": 4096, 00:16:02.532 "num_blocks": 8192, 00:16:02.532 "uuid": "cdeedf22-0939-4e37-9e4f-33d2e074618a", 00:16:02.532 "assigned_rate_limits": { 00:16:02.532 "rw_ios_per_sec": 0, 00:16:02.532 "rw_mbytes_per_sec": 0, 00:16:02.532 "r_mbytes_per_sec": 0, 00:16:02.532 "w_mbytes_per_sec": 0 00:16:02.532 }, 00:16:02.532 "claimed": true, 00:16:02.532 "claim_type": "exclusive_write", 00:16:02.532 "zoned": false, 00:16:02.532 "supported_io_types": { 00:16:02.532 "read": true, 00:16:02.532 "write": true, 00:16:02.532 "unmap": true, 00:16:02.532 "flush": true, 00:16:02.532 "reset": true, 00:16:02.532 "nvme_admin": false, 00:16:02.532 "nvme_io": false, 00:16:02.532 "nvme_io_md": false, 00:16:02.532 "write_zeroes": true, 00:16:02.532 "zcopy": true, 00:16:02.532 "get_zone_info": false, 00:16:02.532 "zone_management": false, 00:16:02.532 "zone_append": false, 00:16:02.532 "compare": false, 00:16:02.532 "compare_and_write": false, 00:16:02.532 "abort": true, 00:16:02.532 "seek_hole": false, 00:16:02.532 "seek_data": false, 00:16:02.532 "copy": true, 00:16:02.532 "nvme_iov_md": false 
00:16:02.532 }, 00:16:02.532 "memory_domains": [ 00:16:02.532 { 00:16:02.532 "dma_device_id": "system", 00:16:02.532 "dma_device_type": 1 00:16:02.532 }, 00:16:02.532 { 00:16:02.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.532 "dma_device_type": 2 00:16:02.532 } 00:16:02.532 ], 00:16:02.532 "driver_specific": {} 00:16:02.532 } 00:16:02.532 ] 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.533 "name": "Existed_Raid", 00:16:02.533 "uuid": "31362de4-ed06-4680-98e3-fa5b3ea60729", 00:16:02.533 "strip_size_kb": 0, 00:16:02.533 "state": "configuring", 00:16:02.533 "raid_level": "raid1", 00:16:02.533 "superblock": true, 00:16:02.533 "num_base_bdevs": 2, 00:16:02.533 "num_base_bdevs_discovered": 1, 00:16:02.533 "num_base_bdevs_operational": 2, 00:16:02.533 "base_bdevs_list": [ 00:16:02.533 { 00:16:02.533 "name": "BaseBdev1", 00:16:02.533 "uuid": "cdeedf22-0939-4e37-9e4f-33d2e074618a", 00:16:02.533 "is_configured": true, 00:16:02.533 "data_offset": 256, 00:16:02.533 "data_size": 7936 00:16:02.533 }, 00:16:02.533 { 00:16:02.533 "name": "BaseBdev2", 00:16:02.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.533 "is_configured": false, 00:16:02.533 "data_offset": 0, 00:16:02.533 "data_size": 0 00:16:02.533 } 00:16:02.533 ] 00:16:02.533 }' 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.533 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.133 [2024-11-28 
18:56:32.511387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.133 [2024-11-28 18:56:32.511444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.133 [2024-11-28 18:56:32.523445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.133 [2024-11-28 18:56:32.525382] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.133 [2024-11-28 18:56:32.525478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.133 "name": "Existed_Raid", 00:16:03.133 "uuid": "001a8b95-4b27-4881-b8d7-8887e8329d38", 00:16:03.133 "strip_size_kb": 0, 00:16:03.133 "state": "configuring", 00:16:03.133 "raid_level": "raid1", 00:16:03.133 "superblock": true, 00:16:03.133 "num_base_bdevs": 2, 00:16:03.133 "num_base_bdevs_discovered": 1, 00:16:03.133 "num_base_bdevs_operational": 2, 00:16:03.133 "base_bdevs_list": [ 00:16:03.133 { 00:16:03.133 "name": "BaseBdev1", 00:16:03.133 "uuid": "cdeedf22-0939-4e37-9e4f-33d2e074618a", 00:16:03.133 "is_configured": true, 00:16:03.133 "data_offset": 256, 
00:16:03.133 "data_size": 7936 00:16:03.133 }, 00:16:03.133 { 00:16:03.133 "name": "BaseBdev2", 00:16:03.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.133 "is_configured": false, 00:16:03.133 "data_offset": 0, 00:16:03.133 "data_size": 0 00:16:03.133 } 00:16:03.133 ] 00:16:03.133 }' 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.133 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.406 [2024-11-28 18:56:32.954488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.406 BaseBdev2 00:16:03.406 [2024-11-28 18:56:32.954748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:03.406 [2024-11-28 18:56:32.954776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:03.406 [2024-11-28 18:56:32.955022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:03.406 [2024-11-28 18:56:32.955178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:03.406 [2024-11-28 18:56:32.955188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:16:03.406 [2024-11-28 18:56:32.955305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.406 [ 00:16:03.406 { 00:16:03.406 "name": "BaseBdev2", 00:16:03.406 "aliases": [ 00:16:03.406 "dc5cdd34-2007-4480-aff7-98ac98ee5cc6" 00:16:03.406 ], 00:16:03.406 "product_name": "Malloc disk", 00:16:03.406 "block_size": 4096, 00:16:03.406 "num_blocks": 8192, 00:16:03.406 "uuid": "dc5cdd34-2007-4480-aff7-98ac98ee5cc6", 00:16:03.406 "assigned_rate_limits": { 00:16:03.406 "rw_ios_per_sec": 0, 00:16:03.406 "rw_mbytes_per_sec": 0, 00:16:03.406 "r_mbytes_per_sec": 0, 00:16:03.406 "w_mbytes_per_sec": 0 00:16:03.406 }, 
00:16:03.406 "claimed": true, 00:16:03.406 "claim_type": "exclusive_write", 00:16:03.406 "zoned": false, 00:16:03.406 "supported_io_types": { 00:16:03.406 "read": true, 00:16:03.406 "write": true, 00:16:03.406 "unmap": true, 00:16:03.406 "flush": true, 00:16:03.406 "reset": true, 00:16:03.406 "nvme_admin": false, 00:16:03.406 "nvme_io": false, 00:16:03.406 "nvme_io_md": false, 00:16:03.406 "write_zeroes": true, 00:16:03.406 "zcopy": true, 00:16:03.406 "get_zone_info": false, 00:16:03.406 "zone_management": false, 00:16:03.406 "zone_append": false, 00:16:03.406 "compare": false, 00:16:03.406 "compare_and_write": false, 00:16:03.406 "abort": true, 00:16:03.406 "seek_hole": false, 00:16:03.406 "seek_data": false, 00:16:03.406 "copy": true, 00:16:03.406 "nvme_iov_md": false 00:16:03.406 }, 00:16:03.406 "memory_domains": [ 00:16:03.406 { 00:16:03.406 "dma_device_id": "system", 00:16:03.406 "dma_device_type": 1 00:16:03.406 }, 00:16:03.406 { 00:16:03.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.406 "dma_device_type": 2 00:16:03.406 } 00:16:03.406 ], 00:16:03.406 "driver_specific": {} 00:16:03.406 } 00:16:03.406 ] 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.406 18:56:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.406 18:56:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.406 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.406 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.665 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.665 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.665 "name": "Existed_Raid", 00:16:03.665 "uuid": "001a8b95-4b27-4881-b8d7-8887e8329d38", 00:16:03.665 "strip_size_kb": 0, 00:16:03.665 "state": "online", 00:16:03.665 "raid_level": "raid1", 00:16:03.665 "superblock": true, 00:16:03.665 "num_base_bdevs": 2, 00:16:03.665 "num_base_bdevs_discovered": 2, 00:16:03.665 "num_base_bdevs_operational": 2, 00:16:03.665 "base_bdevs_list": [ 00:16:03.665 { 00:16:03.665 "name": "BaseBdev1", 00:16:03.665 "uuid": 
"cdeedf22-0939-4e37-9e4f-33d2e074618a", 00:16:03.665 "is_configured": true, 00:16:03.665 "data_offset": 256, 00:16:03.665 "data_size": 7936 00:16:03.665 }, 00:16:03.665 { 00:16:03.665 "name": "BaseBdev2", 00:16:03.665 "uuid": "dc5cdd34-2007-4480-aff7-98ac98ee5cc6", 00:16:03.665 "is_configured": true, 00:16:03.665 "data_offset": 256, 00:16:03.665 "data_size": 7936 00:16:03.665 } 00:16:03.665 ] 00:16:03.665 }' 00:16:03.665 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.665 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.924 [2024-11-28 18:56:33.438883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.924 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.924 "name": "Existed_Raid", 00:16:03.924 "aliases": [ 00:16:03.924 "001a8b95-4b27-4881-b8d7-8887e8329d38" 00:16:03.924 ], 00:16:03.924 "product_name": "Raid Volume", 00:16:03.924 "block_size": 4096, 00:16:03.924 "num_blocks": 7936, 00:16:03.924 "uuid": "001a8b95-4b27-4881-b8d7-8887e8329d38", 00:16:03.924 "assigned_rate_limits": { 00:16:03.924 "rw_ios_per_sec": 0, 00:16:03.924 "rw_mbytes_per_sec": 0, 00:16:03.924 "r_mbytes_per_sec": 0, 00:16:03.924 "w_mbytes_per_sec": 0 00:16:03.924 }, 00:16:03.924 "claimed": false, 00:16:03.924 "zoned": false, 00:16:03.924 "supported_io_types": { 00:16:03.924 "read": true, 00:16:03.924 "write": true, 00:16:03.924 "unmap": false, 00:16:03.924 "flush": false, 00:16:03.924 "reset": true, 00:16:03.924 "nvme_admin": false, 00:16:03.924 "nvme_io": false, 00:16:03.924 "nvme_io_md": false, 00:16:03.924 "write_zeroes": true, 00:16:03.924 "zcopy": false, 00:16:03.924 "get_zone_info": false, 00:16:03.924 "zone_management": false, 00:16:03.924 "zone_append": false, 00:16:03.924 "compare": false, 00:16:03.924 "compare_and_write": false, 00:16:03.925 "abort": false, 00:16:03.925 "seek_hole": false, 00:16:03.925 "seek_data": false, 00:16:03.925 "copy": false, 00:16:03.925 "nvme_iov_md": false 00:16:03.925 }, 00:16:03.925 "memory_domains": [ 00:16:03.925 { 00:16:03.925 "dma_device_id": "system", 00:16:03.925 "dma_device_type": 1 00:16:03.925 }, 00:16:03.925 { 00:16:03.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.925 "dma_device_type": 2 00:16:03.925 }, 00:16:03.925 { 00:16:03.925 "dma_device_id": "system", 00:16:03.925 "dma_device_type": 1 00:16:03.925 }, 00:16:03.925 { 00:16:03.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.925 "dma_device_type": 2 00:16:03.925 } 00:16:03.925 ], 00:16:03.925 "driver_specific": { 00:16:03.925 "raid": { 00:16:03.925 "uuid": 
"001a8b95-4b27-4881-b8d7-8887e8329d38", 00:16:03.925 "strip_size_kb": 0, 00:16:03.925 "state": "online", 00:16:03.925 "raid_level": "raid1", 00:16:03.925 "superblock": true, 00:16:03.925 "num_base_bdevs": 2, 00:16:03.925 "num_base_bdevs_discovered": 2, 00:16:03.925 "num_base_bdevs_operational": 2, 00:16:03.925 "base_bdevs_list": [ 00:16:03.925 { 00:16:03.925 "name": "BaseBdev1", 00:16:03.925 "uuid": "cdeedf22-0939-4e37-9e4f-33d2e074618a", 00:16:03.925 "is_configured": true, 00:16:03.925 "data_offset": 256, 00:16:03.925 "data_size": 7936 00:16:03.925 }, 00:16:03.925 { 00:16:03.925 "name": "BaseBdev2", 00:16:03.925 "uuid": "dc5cdd34-2007-4480-aff7-98ac98ee5cc6", 00:16:03.925 "is_configured": true, 00:16:03.925 "data_offset": 256, 00:16:03.925 "data_size": 7936 00:16:03.925 } 00:16:03.925 ] 00:16:03.925 } 00:16:03.925 } 00:16:03.925 }' 00:16:03.925 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.925 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:03.925 BaseBdev2' 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.185 
18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.185 [2024-11-28 18:56:33.682771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.185 "name": "Existed_Raid", 00:16:04.185 "uuid": "001a8b95-4b27-4881-b8d7-8887e8329d38", 00:16:04.185 "strip_size_kb": 0, 00:16:04.185 "state": "online", 00:16:04.185 "raid_level": "raid1", 00:16:04.185 "superblock": true, 00:16:04.185 "num_base_bdevs": 2, 00:16:04.185 "num_base_bdevs_discovered": 1, 00:16:04.185 "num_base_bdevs_operational": 1, 00:16:04.185 "base_bdevs_list": [ 00:16:04.185 { 00:16:04.185 "name": null, 00:16:04.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.185 "is_configured": false, 00:16:04.185 "data_offset": 0, 00:16:04.185 "data_size": 7936 00:16:04.185 }, 00:16:04.185 { 00:16:04.185 "name": "BaseBdev2", 00:16:04.185 "uuid": "dc5cdd34-2007-4480-aff7-98ac98ee5cc6", 00:16:04.185 "is_configured": true, 00:16:04.185 "data_offset": 256, 00:16:04.185 "data_size": 7936 00:16:04.185 } 00:16:04.185 ] 00:16:04.185 }' 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.185 18:56:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.755 [2024-11-28 18:56:34.190209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:04.755 [2024-11-28 18:56:34.190347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.755 [2024-11-28 18:56:34.202037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.755 [2024-11-28 18:56:34.202182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.755 [2024-11-28 18:56:34.202230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:04.755 18:56:34 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:04.755 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 97808 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 97808 ']' 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 97808 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97808 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97808' 00:16:04.756 
killing process with pid 97808 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 97808 00:16:04.756 [2024-11-28 18:56:34.288495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:04.756 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 97808 00:16:04.756 [2024-11-28 18:56:34.289416] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.016 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:05.016 00:16:05.016 real 0m3.866s 00:16:05.016 user 0m6.082s 00:16:05.016 sys 0m0.841s 00:16:05.016 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.016 18:56:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.016 ************************************ 00:16:05.016 END TEST raid_state_function_test_sb_4k 00:16:05.016 ************************************ 00:16:05.016 18:56:34 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:05.016 18:56:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:05.016 18:56:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.016 18:56:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.016 ************************************ 00:16:05.016 START TEST raid_superblock_test_4k 00:16:05.016 ************************************ 00:16:05.016 18:56:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:05.016 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:05.016 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:05.016 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # 
base_bdevs_malloc=() 00:16:05.016 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=98048 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 98048 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 98048 ']' 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.017 18:56:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.277 [2024-11-28 18:56:34.686451] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:05.277 [2024-11-28 18:56:34.686662] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98048 ] 00:16:05.277 [2024-11-28 18:56:34.821518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:05.277 [2024-11-28 18:56:34.860505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.537 [2024-11-28 18:56:34.887538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.537 [2024-11-28 18:56:34.930906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.537 [2024-11-28 18:56:34.931019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.105 malloc1 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.105 [2024-11-28 18:56:35.523750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:06.105 [2024-11-28 18:56:35.523874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.105 [2024-11-28 18:56:35.523920] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:06.105 [2024-11-28 18:56:35.523963] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.105 [2024-11-28 18:56:35.526020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.105 [2024-11-28 18:56:35.526104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:06.105 pt1 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.105 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:06.106 18:56:35 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.106 malloc2 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.106 [2024-11-28 18:56:35.556337] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:06.106 [2024-11-28 18:56:35.556386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.106 [2024-11-28 18:56:35.556404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:06.106 [2024-11-28 18:56:35.556412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.106 [2024-11-28 18:56:35.558384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.106 [2024-11-28 18:56:35.558417] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:06.106 pt2 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( 
i++ )) 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.106 [2024-11-28 18:56:35.568361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:06.106 [2024-11-28 18:56:35.570151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:06.106 [2024-11-28 18:56:35.570329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:06.106 [2024-11-28 18:56:35.570374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:06.106 [2024-11-28 18:56:35.570658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:06.106 [2024-11-28 18:56:35.570816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:06.106 [2024-11-28 18:56:35.570860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:06.106 [2024-11-28 18:56:35.571023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.106 18:56:35 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.106 "name": "raid_bdev1", 00:16:06.106 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:06.106 "strip_size_kb": 0, 00:16:06.106 "state": "online", 00:16:06.106 "raid_level": "raid1", 00:16:06.106 "superblock": true, 00:16:06.106 "num_base_bdevs": 2, 00:16:06.106 "num_base_bdevs_discovered": 2, 00:16:06.106 "num_base_bdevs_operational": 2, 00:16:06.106 "base_bdevs_list": [ 00:16:06.106 { 00:16:06.106 "name": "pt1", 00:16:06.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:06.106 "is_configured": true, 00:16:06.106 "data_offset": 256, 00:16:06.106 "data_size": 
7936 00:16:06.106 }, 00:16:06.106 { 00:16:06.106 "name": "pt2", 00:16:06.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.106 "is_configured": true, 00:16:06.106 "data_offset": 256, 00:16:06.106 "data_size": 7936 00:16:06.106 } 00:16:06.106 ] 00:16:06.106 }' 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.106 18:56:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:06.676 [2024-11-28 18:56:36.020768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.676 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:06.676 "name": "raid_bdev1", 00:16:06.676 "aliases": [ 00:16:06.676 
"b90187e1-f92f-4188-bd65-72b6d76519df" 00:16:06.676 ], 00:16:06.676 "product_name": "Raid Volume", 00:16:06.676 "block_size": 4096, 00:16:06.676 "num_blocks": 7936, 00:16:06.676 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:06.676 "assigned_rate_limits": { 00:16:06.676 "rw_ios_per_sec": 0, 00:16:06.676 "rw_mbytes_per_sec": 0, 00:16:06.676 "r_mbytes_per_sec": 0, 00:16:06.676 "w_mbytes_per_sec": 0 00:16:06.676 }, 00:16:06.676 "claimed": false, 00:16:06.676 "zoned": false, 00:16:06.676 "supported_io_types": { 00:16:06.676 "read": true, 00:16:06.676 "write": true, 00:16:06.676 "unmap": false, 00:16:06.676 "flush": false, 00:16:06.676 "reset": true, 00:16:06.676 "nvme_admin": false, 00:16:06.676 "nvme_io": false, 00:16:06.676 "nvme_io_md": false, 00:16:06.676 "write_zeroes": true, 00:16:06.676 "zcopy": false, 00:16:06.676 "get_zone_info": false, 00:16:06.676 "zone_management": false, 00:16:06.676 "zone_append": false, 00:16:06.676 "compare": false, 00:16:06.676 "compare_and_write": false, 00:16:06.676 "abort": false, 00:16:06.676 "seek_hole": false, 00:16:06.676 "seek_data": false, 00:16:06.676 "copy": false, 00:16:06.676 "nvme_iov_md": false 00:16:06.676 }, 00:16:06.676 "memory_domains": [ 00:16:06.676 { 00:16:06.676 "dma_device_id": "system", 00:16:06.676 "dma_device_type": 1 00:16:06.676 }, 00:16:06.676 { 00:16:06.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.676 "dma_device_type": 2 00:16:06.676 }, 00:16:06.676 { 00:16:06.676 "dma_device_id": "system", 00:16:06.676 "dma_device_type": 1 00:16:06.676 }, 00:16:06.676 { 00:16:06.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.676 "dma_device_type": 2 00:16:06.676 } 00:16:06.676 ], 00:16:06.676 "driver_specific": { 00:16:06.676 "raid": { 00:16:06.676 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:06.676 "strip_size_kb": 0, 00:16:06.676 "state": "online", 00:16:06.676 "raid_level": "raid1", 00:16:06.676 "superblock": true, 00:16:06.676 "num_base_bdevs": 2, 00:16:06.676 
"num_base_bdevs_discovered": 2, 00:16:06.676 "num_base_bdevs_operational": 2, 00:16:06.676 "base_bdevs_list": [ 00:16:06.676 { 00:16:06.676 "name": "pt1", 00:16:06.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:06.676 "is_configured": true, 00:16:06.676 "data_offset": 256, 00:16:06.676 "data_size": 7936 00:16:06.676 }, 00:16:06.676 { 00:16:06.677 "name": "pt2", 00:16:06.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.677 "is_configured": true, 00:16:06.677 "data_offset": 256, 00:16:06.677 "data_size": 7936 00:16:06.677 } 00:16:06.677 ] 00:16:06.677 } 00:16:06.677 } 00:16:06.677 }' 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:06.677 pt2' 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 
00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.677 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.677 [2024-11-28 18:56:36.272732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b90187e1-f92f-4188-bd65-72b6d76519df 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z b90187e1-f92f-4188-bd65-72b6d76519df ']' 00:16:06.937 
18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.937 [2024-11-28 18:56:36.316539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.937 [2024-11-28 18:56:36.316600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.937 [2024-11-28 18:56:36.316683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.937 [2024-11-28 18:56:36.316751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.937 [2024-11-28 18:56:36.316785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:06.937 18:56:36 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:06.937 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.938 [2024-11-28 18:56:36.456593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:06.938 [2024-11-28 18:56:36.458353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:06.938 [2024-11-28 18:56:36.458400] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:06.938 [2024-11-28 18:56:36.458449] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:06.938 [2024-11-28 18:56:36.458463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.938 [2024-11-28 18:56:36.458471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:16:06.938 request: 00:16:06.938 { 00:16:06.938 "name": "raid_bdev1", 00:16:06.938 "raid_level": "raid1", 00:16:06.938 "base_bdevs": [ 00:16:06.938 "malloc1", 
00:16:06.938 "malloc2" 00:16:06.938 ], 00:16:06.938 "superblock": false, 00:16:06.938 "method": "bdev_raid_create", 00:16:06.938 "req_id": 1 00:16:06.938 } 00:16:06.938 Got JSON-RPC error response 00:16:06.938 response: 00:16:06.938 { 00:16:06.938 "code": -17, 00:16:06.938 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:06.938 } 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:06.938 [2024-11-28 18:56:36.508596] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:06.938 [2024-11-28 18:56:36.508683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.938 [2024-11-28 18:56:36.508713] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:06.938 [2024-11-28 18:56:36.508743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.938 [2024-11-28 18:56:36.510780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.938 [2024-11-28 18:56:36.510851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:06.938 [2024-11-28 18:56:36.510942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:06.938 [2024-11-28 18:56:36.510999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:06.938 pt1 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.938 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.198 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.198 "name": "raid_bdev1", 00:16:07.198 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:07.198 "strip_size_kb": 0, 00:16:07.198 "state": "configuring", 00:16:07.198 "raid_level": "raid1", 00:16:07.198 "superblock": true, 00:16:07.198 "num_base_bdevs": 2, 00:16:07.198 "num_base_bdevs_discovered": 1, 00:16:07.198 "num_base_bdevs_operational": 2, 00:16:07.198 "base_bdevs_list": [ 00:16:07.198 { 00:16:07.198 "name": "pt1", 00:16:07.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.198 "is_configured": true, 00:16:07.198 "data_offset": 256, 00:16:07.198 "data_size": 7936 00:16:07.198 }, 00:16:07.198 { 00:16:07.198 "name": null, 00:16:07.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.198 "is_configured": false, 00:16:07.198 "data_offset": 256, 00:16:07.198 "data_size": 7936 00:16:07.198 } 00:16:07.198 ] 00:16:07.198 }' 00:16:07.198 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.198 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.459 18:56:36 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.459 [2024-11-28 18:56:36.972714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:07.459 [2024-11-28 18:56:36.972767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.459 [2024-11-28 18:56:36.972784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:07.459 [2024-11-28 18:56:36.972794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.459 [2024-11-28 18:56:36.973109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.459 [2024-11-28 18:56:36.973128] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:07.459 [2024-11-28 18:56:36.973180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:07.459 [2024-11-28 18:56:36.973199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.459 [2024-11-28 18:56:36.973291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:07.459 [2024-11-28 18:56:36.973303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:07.459 [2024-11-28 18:56:36.973535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:07.459 [2024-11-28 18:56:36.973643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:07.459 [2024-11-28 18:56:36.973651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:07.459 [2024-11-28 18:56:36.973739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.459 pt2 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.459 18:56:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.459 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.459 "name": "raid_bdev1", 00:16:07.459 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:07.459 "strip_size_kb": 0, 00:16:07.459 "state": "online", 00:16:07.459 "raid_level": "raid1", 00:16:07.459 "superblock": true, 00:16:07.459 "num_base_bdevs": 2, 00:16:07.459 "num_base_bdevs_discovered": 2, 00:16:07.459 "num_base_bdevs_operational": 2, 00:16:07.459 "base_bdevs_list": [ 00:16:07.459 { 00:16:07.459 "name": "pt1", 00:16:07.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.459 "is_configured": true, 00:16:07.459 "data_offset": 256, 00:16:07.459 "data_size": 7936 00:16:07.459 }, 00:16:07.459 { 00:16:07.459 "name": "pt2", 00:16:07.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.459 "is_configured": true, 00:16:07.459 "data_offset": 256, 00:16:07.459 "data_size": 7936 00:16:07.459 } 00:16:07.459 ] 00:16:07.459 }' 00:16:07.459 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.459 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:08.030 
18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.030 [2024-11-28 18:56:37.461056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.030 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:08.030 "name": "raid_bdev1", 00:16:08.030 "aliases": [ 00:16:08.030 "b90187e1-f92f-4188-bd65-72b6d76519df" 00:16:08.030 ], 00:16:08.030 "product_name": "Raid Volume", 00:16:08.030 "block_size": 4096, 00:16:08.030 "num_blocks": 7936, 00:16:08.030 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:08.030 "assigned_rate_limits": { 00:16:08.030 "rw_ios_per_sec": 0, 00:16:08.030 "rw_mbytes_per_sec": 0, 00:16:08.030 "r_mbytes_per_sec": 0, 00:16:08.030 "w_mbytes_per_sec": 0 00:16:08.030 }, 00:16:08.030 "claimed": false, 00:16:08.030 "zoned": false, 00:16:08.030 "supported_io_types": { 00:16:08.030 "read": true, 00:16:08.030 "write": true, 00:16:08.030 "unmap": false, 00:16:08.030 "flush": false, 00:16:08.030 "reset": true, 00:16:08.030 "nvme_admin": false, 00:16:08.030 "nvme_io": false, 00:16:08.030 "nvme_io_md": false, 00:16:08.030 "write_zeroes": true, 00:16:08.030 "zcopy": false, 00:16:08.030 "get_zone_info": 
false, 00:16:08.030 "zone_management": false, 00:16:08.030 "zone_append": false, 00:16:08.030 "compare": false, 00:16:08.030 "compare_and_write": false, 00:16:08.030 "abort": false, 00:16:08.030 "seek_hole": false, 00:16:08.030 "seek_data": false, 00:16:08.030 "copy": false, 00:16:08.030 "nvme_iov_md": false 00:16:08.031 }, 00:16:08.031 "memory_domains": [ 00:16:08.031 { 00:16:08.031 "dma_device_id": "system", 00:16:08.031 "dma_device_type": 1 00:16:08.031 }, 00:16:08.031 { 00:16:08.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.031 "dma_device_type": 2 00:16:08.031 }, 00:16:08.031 { 00:16:08.031 "dma_device_id": "system", 00:16:08.031 "dma_device_type": 1 00:16:08.031 }, 00:16:08.031 { 00:16:08.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.031 "dma_device_type": 2 00:16:08.031 } 00:16:08.031 ], 00:16:08.031 "driver_specific": { 00:16:08.031 "raid": { 00:16:08.031 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:08.031 "strip_size_kb": 0, 00:16:08.031 "state": "online", 00:16:08.031 "raid_level": "raid1", 00:16:08.031 "superblock": true, 00:16:08.031 "num_base_bdevs": 2, 00:16:08.031 "num_base_bdevs_discovered": 2, 00:16:08.031 "num_base_bdevs_operational": 2, 00:16:08.031 "base_bdevs_list": [ 00:16:08.031 { 00:16:08.031 "name": "pt1", 00:16:08.031 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.031 "is_configured": true, 00:16:08.031 "data_offset": 256, 00:16:08.031 "data_size": 7936 00:16:08.031 }, 00:16:08.031 { 00:16:08.031 "name": "pt2", 00:16:08.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.031 "is_configured": true, 00:16:08.031 "data_offset": 256, 00:16:08.031 "data_size": 7936 00:16:08.031 } 00:16:08.031 ] 00:16:08.031 } 00:16:08.031 } 00:16:08.031 }' 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:16:08.031 pt2' 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.031 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.291 18:56:37 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.291 [2024-11-28 18:56:37.689117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' b90187e1-f92f-4188-bd65-72b6d76519df '!=' b90187e1-f92f-4188-bd65-72b6d76519df ']' 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.291 [2024-11-28 18:56:37.736908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.291 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.292 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.292 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.292 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.292 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.292 "name": "raid_bdev1", 00:16:08.292 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:08.292 "strip_size_kb": 0, 00:16:08.292 "state": "online", 00:16:08.292 "raid_level": "raid1", 00:16:08.292 "superblock": true, 00:16:08.292 "num_base_bdevs": 2, 00:16:08.292 "num_base_bdevs_discovered": 1, 
00:16:08.292 "num_base_bdevs_operational": 1, 00:16:08.292 "base_bdevs_list": [ 00:16:08.292 { 00:16:08.292 "name": null, 00:16:08.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.292 "is_configured": false, 00:16:08.292 "data_offset": 0, 00:16:08.292 "data_size": 7936 00:16:08.292 }, 00:16:08.292 { 00:16:08.292 "name": "pt2", 00:16:08.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.292 "is_configured": true, 00:16:08.292 "data_offset": 256, 00:16:08.292 "data_size": 7936 00:16:08.292 } 00:16:08.292 ] 00:16:08.292 }' 00:16:08.292 18:56:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.292 18:56:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.862 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:08.862 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.862 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.862 [2024-11-28 18:56:38.213036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:08.862 [2024-11-28 18:56:38.213102] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.862 [2024-11-28 18:56:38.213203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.862 [2024-11-28 18:56:38.213254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.862 [2024-11-28 18:56:38.213303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:08.862 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.862 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.862 
18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:08.862 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.863 [2024-11-28 18:56:38.285069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.863 [2024-11-28 18:56:38.285170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.863 [2024-11-28 18:56:38.285211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:08.863 [2024-11-28 18:56:38.285238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.863 [2024-11-28 18:56:38.287293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.863 [2024-11-28 18:56:38.287370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.863 [2024-11-28 18:56:38.287460] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:08.863 [2024-11-28 18:56:38.287522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.863 [2024-11-28 18:56:38.287615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:08.863 [2024-11-28 18:56:38.287654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:08.863 [2024-11-28 18:56:38.287908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:08.863 [2024-11-28 18:56:38.288067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:08.863 [2024-11-28 18:56:38.288109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:08.863 [2024-11-28 18:56:38.288244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.863 pt2 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.863 "name": "raid_bdev1", 00:16:08.863 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:08.863 "strip_size_kb": 0, 00:16:08.863 "state": 
"online", 00:16:08.863 "raid_level": "raid1", 00:16:08.863 "superblock": true, 00:16:08.863 "num_base_bdevs": 2, 00:16:08.863 "num_base_bdevs_discovered": 1, 00:16:08.863 "num_base_bdevs_operational": 1, 00:16:08.863 "base_bdevs_list": [ 00:16:08.863 { 00:16:08.863 "name": null, 00:16:08.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.863 "is_configured": false, 00:16:08.863 "data_offset": 256, 00:16:08.863 "data_size": 7936 00:16:08.863 }, 00:16:08.863 { 00:16:08.863 "name": "pt2", 00:16:08.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.863 "is_configured": true, 00:16:08.863 "data_offset": 256, 00:16:08.863 "data_size": 7936 00:16:08.863 } 00:16:08.863 ] 00:16:08.863 }' 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.863 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.434 [2024-11-28 18:56:38.745200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.434 [2024-11-28 18:56:38.745265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.434 [2024-11-28 18:56:38.745313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.434 [2024-11-28 18:56:38.745348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.434 [2024-11-28 18:56:38.745357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.434 [2024-11-28 18:56:38.805201] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:09.434 [2024-11-28 18:56:38.805282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.434 [2024-11-28 18:56:38.805316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:09.434 [2024-11-28 18:56:38.805341] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.434 [2024-11-28 18:56:38.807354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.434 [2024-11-28 18:56:38.807423] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:09.434 
[2024-11-28 18:56:38.807509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:09.434 [2024-11-28 18:56:38.807568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:09.434 [2024-11-28 18:56:38.807699] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:09.434 [2024-11-28 18:56:38.807753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.434 [2024-11-28 18:56:38.807791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:16:09.434 [2024-11-28 18:56:38.807866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:09.434 [2024-11-28 18:56:38.807969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:09.434 [2024-11-28 18:56:38.808006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:09.434 [2024-11-28 18:56:38.808226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:09.434 [2024-11-28 18:56:38.808374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:09.434 [2024-11-28 18:56:38.808419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:09.434 [2024-11-28 18:56:38.808564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.434 pt1 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.434 "name": "raid_bdev1", 00:16:09.434 "uuid": "b90187e1-f92f-4188-bd65-72b6d76519df", 00:16:09.434 "strip_size_kb": 0, 00:16:09.434 "state": "online", 00:16:09.434 "raid_level": "raid1", 00:16:09.434 "superblock": true, 00:16:09.434 "num_base_bdevs": 2, 00:16:09.434 "num_base_bdevs_discovered": 1, 00:16:09.434 "num_base_bdevs_operational": 1, 00:16:09.434 "base_bdevs_list": [ 
00:16:09.434 { 00:16:09.434 "name": null, 00:16:09.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.434 "is_configured": false, 00:16:09.434 "data_offset": 256, 00:16:09.434 "data_size": 7936 00:16:09.434 }, 00:16:09.434 { 00:16:09.434 "name": "pt2", 00:16:09.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.434 "is_configured": true, 00:16:09.434 "data_offset": 256, 00:16:09.434 "data_size": 7936 00:16:09.434 } 00:16:09.434 ] 00:16:09.434 }' 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.434 18:56:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.694 18:56:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:09.694 18:56:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:09.694 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.695 [2024-11-28 18:56:39.269550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' b90187e1-f92f-4188-bd65-72b6d76519df '!=' b90187e1-f92f-4188-bd65-72b6d76519df ']' 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 98048 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 98048 ']' 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 98048 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:09.695 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.955 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98048 00:16:09.955 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.955 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.955 killing process with pid 98048 00:16:09.955 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98048' 00:16:09.955 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 98048 00:16:09.955 [2024-11-28 18:56:39.333097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.955 [2024-11-28 18:56:39.333157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.955 [2024-11-28 18:56:39.333191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.955 [2024-11-28 18:56:39.333201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:09.955 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # 
wait 98048 00:16:09.955 [2024-11-28 18:56:39.356040] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.216 18:56:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:10.216 00:16:10.216 real 0m4.986s 00:16:10.216 user 0m8.095s 00:16:10.216 sys 0m1.160s 00:16:10.216 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.216 ************************************ 00:16:10.216 END TEST raid_superblock_test_4k 00:16:10.216 ************************************ 00:16:10.216 18:56:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.216 18:56:39 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:10.216 18:56:39 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:10.216 18:56:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:10.216 18:56:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.216 18:56:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.216 ************************************ 00:16:10.216 START TEST raid_rebuild_test_sb_4k 00:16:10.216 ************************************ 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:10.216 18:56:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:10.216 
18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=98361 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 98361 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 98361 ']' 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.216 18:56:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.216 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:10.216 Zero copy mechanism will not be used. 00:16:10.216 [2024-11-28 18:56:39.765871] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:10.216 [2024-11-28 18:56:39.765988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98361 ] 00:16:10.477 [2024-11-28 18:56:39.900357] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:10.477 [2024-11-28 18:56:39.939417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.477 [2024-11-28 18:56:39.965850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.477 [2024-11-28 18:56:40.008921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.477 [2024-11-28 18:56:40.009051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.048 BaseBdev1_malloc 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.048 [2024-11-28 18:56:40.589989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.048 [2024-11-28 18:56:40.590048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.048 [2024-11-28 18:56:40.590070] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000007280 00:16:11.048 [2024-11-28 18:56:40.590082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.048 [2024-11-28 18:56:40.592148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.048 [2024-11-28 18:56:40.592264] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.048 BaseBdev1 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.048 BaseBdev2_malloc 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.048 [2024-11-28 18:56:40.614482] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:11.048 [2024-11-28 18:56:40.614531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.048 [2024-11-28 18:56:40.614548] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:11.048 [2024-11-28 18:56:40.614557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.048 [2024-11-28 
18:56:40.616526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.048 [2024-11-28 18:56:40.616563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:11.048 BaseBdev2 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.048 spare_malloc 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.048 spare_delay 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.048 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.309 [2024-11-28 18:56:40.654936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:11.309 [2024-11-28 18:56:40.654989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.309 [2024-11-28 18:56:40.655007] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:11.309 [2024-11-28 18:56:40.655020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.309 [2024-11-28 18:56:40.657119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.309 [2024-11-28 18:56:40.657200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:11.309 spare 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.309 [2024-11-28 18:56:40.666998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.309 [2024-11-28 18:56:40.668809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.309 [2024-11-28 18:56:40.668961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:11.309 [2024-11-28 18:56:40.668975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:11.309 [2024-11-28 18:56:40.669237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:11.309 [2024-11-28 18:56:40.669371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:11.309 [2024-11-28 18:56:40.669380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:11.309 [2024-11-28 18:56:40.669489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.309 18:56:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.309 "name": "raid_bdev1", 00:16:11.309 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:11.309 
"strip_size_kb": 0, 00:16:11.309 "state": "online", 00:16:11.309 "raid_level": "raid1", 00:16:11.309 "superblock": true, 00:16:11.309 "num_base_bdevs": 2, 00:16:11.309 "num_base_bdevs_discovered": 2, 00:16:11.309 "num_base_bdevs_operational": 2, 00:16:11.309 "base_bdevs_list": [ 00:16:11.309 { 00:16:11.309 "name": "BaseBdev1", 00:16:11.309 "uuid": "6c0cdfe5-5596-5658-8480-a99ccef46901", 00:16:11.309 "is_configured": true, 00:16:11.309 "data_offset": 256, 00:16:11.309 "data_size": 7936 00:16:11.309 }, 00:16:11.309 { 00:16:11.309 "name": "BaseBdev2", 00:16:11.309 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:11.309 "is_configured": true, 00:16:11.309 "data_offset": 256, 00:16:11.309 "data_size": 7936 00:16:11.309 } 00:16:11.309 ] 00:16:11.309 }' 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.309 18:56:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.569 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:11.569 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.569 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.569 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.569 [2024-11-28 18:56:41.151349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.569 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.830 18:56:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:11.830 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:11.831 [2024-11-28 18:56:41.403228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:11.831 /dev/nbd0 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.091 1+0 records in 00:16:12.091 1+0 records out 00:16:12.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057261 s, 7.2 MB/s 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:12.091 18:56:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:12.091 18:56:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:12.660 7936+0 records in 00:16:12.660 7936+0 records out 00:16:12.660 32505856 bytes (33 MB, 31 MiB) copied, 0.602458 s, 54.0 MB/s 00:16:12.660 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:12.660 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.660 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:12.660 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.660 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:12.660 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.660 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:12.920 [2024-11-28 18:56:42.306376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.920 [2024-11-28 18:56:42.319148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.920 "name": "raid_bdev1", 00:16:12.920 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:12.920 "strip_size_kb": 0, 00:16:12.920 "state": "online", 00:16:12.920 "raid_level": "raid1", 00:16:12.920 "superblock": true, 00:16:12.920 "num_base_bdevs": 2, 00:16:12.920 "num_base_bdevs_discovered": 1, 00:16:12.920 "num_base_bdevs_operational": 1, 00:16:12.920 "base_bdevs_list": [ 00:16:12.920 { 00:16:12.920 "name": null, 00:16:12.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.920 "is_configured": false, 00:16:12.920 "data_offset": 0, 00:16:12.920 "data_size": 7936 00:16:12.920 }, 00:16:12.920 { 00:16:12.920 "name": "BaseBdev2", 00:16:12.920 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:12.920 "is_configured": true, 00:16:12.920 "data_offset": 256, 00:16:12.920 "data_size": 7936 00:16:12.920 } 00:16:12.920 ] 00:16:12.920 }' 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.920 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.488 18:56:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.488 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.488 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.488 [2024-11-28 18:56:42.811270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.488 [2024-11-28 18:56:42.816292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:16:13.488 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.488 18:56:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:13.488 [2024-11-28 18:56:42.818156] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.426 18:56:43 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.426 "name": "raid_bdev1", 00:16:14.426 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:14.426 "strip_size_kb": 0, 00:16:14.426 "state": "online", 00:16:14.426 "raid_level": "raid1", 00:16:14.426 "superblock": true, 00:16:14.426 "num_base_bdevs": 2, 00:16:14.426 "num_base_bdevs_discovered": 2, 00:16:14.426 "num_base_bdevs_operational": 2, 00:16:14.426 "process": { 00:16:14.426 "type": "rebuild", 00:16:14.426 "target": "spare", 00:16:14.426 "progress": { 00:16:14.426 "blocks": 2560, 00:16:14.426 "percent": 32 00:16:14.426 } 00:16:14.426 }, 00:16:14.426 "base_bdevs_list": [ 00:16:14.426 { 00:16:14.426 "name": "spare", 00:16:14.426 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:14.426 "is_configured": true, 00:16:14.426 "data_offset": 256, 00:16:14.426 "data_size": 7936 00:16:14.426 }, 00:16:14.426 { 00:16:14.426 "name": "BaseBdev2", 00:16:14.426 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:14.426 "is_configured": true, 00:16:14.426 "data_offset": 256, 00:16:14.426 "data_size": 7936 00:16:14.426 } 00:16:14.426 ] 00:16:14.426 }' 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:14.426 18:56:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.426 18:56:43 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.426 [2024-11-28 18:56:43.977381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.426 [2024-11-28 18:56:44.024755] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.426 [2024-11-28 18:56:44.024810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.426 [2024-11-28 18:56:44.024824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.426 [2024-11-28 18:56:44.024833] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.685 18:56:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.685 "name": "raid_bdev1", 00:16:14.685 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:14.685 "strip_size_kb": 0, 00:16:14.685 "state": "online", 00:16:14.685 "raid_level": "raid1", 00:16:14.685 "superblock": true, 00:16:14.685 "num_base_bdevs": 2, 00:16:14.685 "num_base_bdevs_discovered": 1, 00:16:14.685 "num_base_bdevs_operational": 1, 00:16:14.685 "base_bdevs_list": [ 00:16:14.685 { 00:16:14.685 "name": null, 00:16:14.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.685 "is_configured": false, 00:16:14.685 "data_offset": 0, 00:16:14.685 "data_size": 7936 00:16:14.685 }, 00:16:14.685 { 00:16:14.685 "name": "BaseBdev2", 00:16:14.685 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:14.685 "is_configured": true, 00:16:14.685 "data_offset": 256, 00:16:14.685 "data_size": 7936 00:16:14.685 } 00:16:14.685 ] 00:16:14.685 }' 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.685 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.943 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.943 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.944 18:56:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.944 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.944 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.944 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.944 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.944 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.944 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.944 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.944 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.944 "name": "raid_bdev1", 00:16:14.944 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:14.944 "strip_size_kb": 0, 00:16:14.944 "state": "online", 00:16:14.944 "raid_level": "raid1", 00:16:14.944 "superblock": true, 00:16:14.944 "num_base_bdevs": 2, 00:16:14.944 "num_base_bdevs_discovered": 1, 00:16:14.944 "num_base_bdevs_operational": 1, 00:16:14.944 "base_bdevs_list": [ 00:16:14.944 { 00:16:14.944 "name": null, 00:16:14.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.944 "is_configured": false, 00:16:14.944 "data_offset": 0, 00:16:14.944 "data_size": 7936 00:16:14.944 }, 00:16:14.944 { 00:16:14.944 "name": "BaseBdev2", 00:16:14.944 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:14.944 "is_configured": true, 00:16:14.944 "data_offset": 256, 00:16:14.944 "data_size": 7936 00:16:14.944 } 00:16:14.944 ] 00:16:14.944 }' 00:16:15.202 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.202 18:56:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.202 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.202 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.202 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:15.202 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.202 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.202 [2024-11-28 18:56:44.649791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.202 [2024-11-28 18:56:44.654374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:16:15.202 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.202 18:56:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:15.202 [2024-11-28 18:56:44.656300] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.139 "name": "raid_bdev1", 00:16:16.139 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:16.139 "strip_size_kb": 0, 00:16:16.139 "state": "online", 00:16:16.139 "raid_level": "raid1", 00:16:16.139 "superblock": true, 00:16:16.139 "num_base_bdevs": 2, 00:16:16.139 "num_base_bdevs_discovered": 2, 00:16:16.139 "num_base_bdevs_operational": 2, 00:16:16.139 "process": { 00:16:16.139 "type": "rebuild", 00:16:16.139 "target": "spare", 00:16:16.139 "progress": { 00:16:16.139 "blocks": 2560, 00:16:16.139 "percent": 32 00:16:16.139 } 00:16:16.139 }, 00:16:16.139 "base_bdevs_list": [ 00:16:16.139 { 00:16:16.139 "name": "spare", 00:16:16.139 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:16.139 "is_configured": true, 00:16:16.139 "data_offset": 256, 00:16:16.139 "data_size": 7936 00:16:16.139 }, 00:16:16.139 { 00:16:16.139 "name": "BaseBdev2", 00:16:16.139 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:16.139 "is_configured": true, 00:16:16.139 "data_offset": 256, 00:16:16.139 "data_size": 7936 00:16:16.139 } 00:16:16.139 ] 00:16:16.139 }' 00:16:16.139 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:16.399 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=558 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.399 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.400 18:56:45 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.400 "name": "raid_bdev1", 00:16:16.400 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:16.400 "strip_size_kb": 0, 00:16:16.400 "state": "online", 00:16:16.400 "raid_level": "raid1", 00:16:16.400 "superblock": true, 00:16:16.400 "num_base_bdevs": 2, 00:16:16.400 "num_base_bdevs_discovered": 2, 00:16:16.400 "num_base_bdevs_operational": 2, 00:16:16.400 "process": { 00:16:16.400 "type": "rebuild", 00:16:16.400 "target": "spare", 00:16:16.400 "progress": { 00:16:16.400 "blocks": 2816, 00:16:16.400 "percent": 35 00:16:16.400 } 00:16:16.400 }, 00:16:16.400 "base_bdevs_list": [ 00:16:16.400 { 00:16:16.400 "name": "spare", 00:16:16.400 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:16.400 "is_configured": true, 00:16:16.400 "data_offset": 256, 00:16:16.400 "data_size": 7936 00:16:16.400 }, 00:16:16.400 { 00:16:16.400 "name": "BaseBdev2", 00:16:16.400 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:16.400 "is_configured": true, 00:16:16.400 "data_offset": 256, 00:16:16.400 "data_size": 7936 00:16:16.400 } 00:16:16.400 ] 00:16:16.400 }' 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.400 18:56:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.782 18:56:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.782 18:56:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.782 "name": "raid_bdev1", 00:16:17.782 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:17.782 "strip_size_kb": 0, 00:16:17.782 "state": "online", 00:16:17.782 "raid_level": "raid1", 00:16:17.782 "superblock": true, 00:16:17.782 "num_base_bdevs": 2, 00:16:17.782 "num_base_bdevs_discovered": 2, 00:16:17.782 "num_base_bdevs_operational": 2, 00:16:17.782 "process": { 00:16:17.782 "type": "rebuild", 00:16:17.782 "target": "spare", 00:16:17.782 "progress": { 00:16:17.782 "blocks": 5888, 00:16:17.782 "percent": 74 00:16:17.782 } 00:16:17.782 }, 00:16:17.782 "base_bdevs_list": [ 00:16:17.782 { 00:16:17.782 "name": "spare", 00:16:17.782 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:17.782 "is_configured": true, 00:16:17.782 "data_offset": 256, 00:16:17.782 "data_size": 7936 00:16:17.782 
}, 00:16:17.782 { 00:16:17.782 "name": "BaseBdev2", 00:16:17.782 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:17.782 "is_configured": true, 00:16:17.782 "data_offset": 256, 00:16:17.782 "data_size": 7936 00:16:17.782 } 00:16:17.782 ] 00:16:17.782 }' 00:16:17.782 18:56:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.782 18:56:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.782 18:56:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.782 18:56:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.782 18:56:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.351 [2024-11-28 18:56:47.771914] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:18.351 [2024-11-28 18:56:47.771979] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:18.351 [2024-11-28 18:56:47.772078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.611 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.611 "name": "raid_bdev1", 00:16:18.611 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:18.611 "strip_size_kb": 0, 00:16:18.611 "state": "online", 00:16:18.611 "raid_level": "raid1", 00:16:18.611 "superblock": true, 00:16:18.611 "num_base_bdevs": 2, 00:16:18.611 "num_base_bdevs_discovered": 2, 00:16:18.611 "num_base_bdevs_operational": 2, 00:16:18.611 "base_bdevs_list": [ 00:16:18.611 { 00:16:18.611 "name": "spare", 00:16:18.611 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:18.611 "is_configured": true, 00:16:18.612 "data_offset": 256, 00:16:18.612 "data_size": 7936 00:16:18.612 }, 00:16:18.612 { 00:16:18.612 "name": "BaseBdev2", 00:16:18.612 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:18.612 "is_configured": true, 00:16:18.612 "data_offset": 256, 00:16:18.612 "data_size": 7936 00:16:18.612 } 00:16:18.612 ] 00:16:18.612 }' 00:16:18.612 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.872 "name": "raid_bdev1", 00:16:18.872 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:18.872 "strip_size_kb": 0, 00:16:18.872 "state": "online", 00:16:18.872 "raid_level": "raid1", 00:16:18.872 "superblock": true, 00:16:18.872 "num_base_bdevs": 2, 00:16:18.872 "num_base_bdevs_discovered": 2, 00:16:18.872 "num_base_bdevs_operational": 2, 00:16:18.872 "base_bdevs_list": [ 00:16:18.872 { 00:16:18.872 "name": "spare", 00:16:18.872 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:18.872 "is_configured": true, 00:16:18.872 "data_offset": 256, 00:16:18.872 "data_size": 7936 00:16:18.872 }, 00:16:18.872 { 00:16:18.872 "name": "BaseBdev2", 00:16:18.872 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:18.872 "is_configured": true, 
00:16:18.872 "data_offset": 256, 00:16:18.872 "data_size": 7936 00:16:18.872 } 00:16:18.872 ] 00:16:18.872 }' 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.872 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.132 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.132 "name": "raid_bdev1", 00:16:19.132 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:19.132 "strip_size_kb": 0, 00:16:19.132 "state": "online", 00:16:19.132 "raid_level": "raid1", 00:16:19.132 "superblock": true, 00:16:19.132 "num_base_bdevs": 2, 00:16:19.132 "num_base_bdevs_discovered": 2, 00:16:19.132 "num_base_bdevs_operational": 2, 00:16:19.132 "base_bdevs_list": [ 00:16:19.132 { 00:16:19.132 "name": "spare", 00:16:19.132 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:19.132 "is_configured": true, 00:16:19.132 "data_offset": 256, 00:16:19.132 "data_size": 7936 00:16:19.132 }, 00:16:19.132 { 00:16:19.132 "name": "BaseBdev2", 00:16:19.132 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:19.132 "is_configured": true, 00:16:19.132 "data_offset": 256, 00:16:19.132 "data_size": 7936 00:16:19.132 } 00:16:19.132 ] 00:16:19.132 }' 00:16:19.132 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.132 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.392 [2024-11-28 18:56:48.900681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.392 [2024-11-28 18:56:48.900713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:16:19.392 [2024-11-28 18:56:48.900781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.392 [2024-11-28 18:56:48.900841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.392 [2024-11-28 18:56:48.900850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.392 18:56:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:19.652 /dev/nbd0 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.652 1+0 records in 00:16:19.652 1+0 records out 00:16:19.652 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000315188 s, 13.0 MB/s 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.652 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:19.912 /dev/nbd1 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.912 1+0 records in 00:16:19.912 1+0 records out 00:16:19.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381165 s, 10.7 MB/s 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.912 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.172 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.431 18:56:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 [2024-11-28 18:56:49.999871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:20.431 [2024-11-28 18:56:49.999926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.431 [2024-11-28 18:56:49.999949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:20.431 [2024-11-28 18:56:49.999957] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.431 [2024-11-28 18:56:50.002108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.431 [2024-11-28 18:56:50.002194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:20.431 [2024-11-28 18:56:50.002281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:20.431 [2024-11-28 
18:56:50.002334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.431 [2024-11-28 18:56:50.002475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.431 spare 00:16:20.431 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.431 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:20.431 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.431 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.689 [2024-11-28 18:56:50.102555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:20.689 [2024-11-28 18:56:50.102583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:20.689 [2024-11-28 18:56:50.102834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:16:20.689 [2024-11-28 18:56:50.102976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:20.689 [2024-11-28 18:56:50.102986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:20.689 [2024-11-28 18:56:50.103098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.689 "name": "raid_bdev1", 00:16:20.689 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:20.689 "strip_size_kb": 0, 00:16:20.689 "state": "online", 00:16:20.689 "raid_level": "raid1", 00:16:20.689 "superblock": true, 00:16:20.689 "num_base_bdevs": 2, 00:16:20.689 "num_base_bdevs_discovered": 2, 00:16:20.689 "num_base_bdevs_operational": 2, 00:16:20.689 "base_bdevs_list": [ 00:16:20.689 { 00:16:20.689 "name": "spare", 00:16:20.689 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:20.689 "is_configured": true, 00:16:20.689 "data_offset": 256, 00:16:20.689 "data_size": 7936 00:16:20.689 }, 00:16:20.689 { 
00:16:20.689 "name": "BaseBdev2", 00:16:20.689 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:20.689 "is_configured": true, 00:16:20.689 "data_offset": 256, 00:16:20.689 "data_size": 7936 00:16:20.689 } 00:16:20.689 ] 00:16:20.689 }' 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.689 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.258 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.258 "name": "raid_bdev1", 00:16:21.258 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:21.258 "strip_size_kb": 0, 00:16:21.258 "state": "online", 00:16:21.258 "raid_level": "raid1", 00:16:21.258 "superblock": true, 00:16:21.258 "num_base_bdevs": 2, 00:16:21.258 "num_base_bdevs_discovered": 2, 
00:16:21.258 "num_base_bdevs_operational": 2, 00:16:21.258 "base_bdevs_list": [ 00:16:21.258 { 00:16:21.258 "name": "spare", 00:16:21.258 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:21.258 "is_configured": true, 00:16:21.258 "data_offset": 256, 00:16:21.258 "data_size": 7936 00:16:21.258 }, 00:16:21.259 { 00:16:21.259 "name": "BaseBdev2", 00:16:21.259 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:21.259 "is_configured": true, 00:16:21.259 "data_offset": 256, 00:16:21.259 "data_size": 7936 00:16:21.259 } 00:16:21.259 ] 00:16:21.259 }' 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.259 [2024-11-28 18:56:50.744089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.259 18:56:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.259 "name": "raid_bdev1", 00:16:21.259 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:21.259 "strip_size_kb": 0, 00:16:21.259 "state": "online", 00:16:21.259 "raid_level": "raid1", 00:16:21.259 "superblock": true, 00:16:21.259 "num_base_bdevs": 2, 00:16:21.259 "num_base_bdevs_discovered": 1, 00:16:21.259 "num_base_bdevs_operational": 1, 00:16:21.259 "base_bdevs_list": [ 00:16:21.259 { 00:16:21.259 "name": null, 00:16:21.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.259 "is_configured": false, 00:16:21.259 "data_offset": 0, 00:16:21.259 "data_size": 7936 00:16:21.259 }, 00:16:21.259 { 00:16:21.259 "name": "BaseBdev2", 00:16:21.259 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:21.259 "is_configured": true, 00:16:21.259 "data_offset": 256, 00:16:21.259 "data_size": 7936 00:16:21.259 } 00:16:21.259 ] 00:16:21.259 }' 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.259 18:56:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.828 18:56:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:21.828 18:56:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.828 18:56:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.828 [2024-11-28 18:56:51.204240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.829 [2024-11-28 18:56:51.204490] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.829 [2024-11-28 18:56:51.204557] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:21.829 [2024-11-28 18:56:51.204665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.829 [2024-11-28 18:56:51.209367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:16:21.829 18:56:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.829 18:56:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:21.829 [2024-11-28 18:56:51.211280] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.769 "name": "raid_bdev1", 00:16:22.769 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:22.769 "strip_size_kb": 0, 00:16:22.769 "state": "online", 
00:16:22.769 "raid_level": "raid1", 00:16:22.769 "superblock": true, 00:16:22.769 "num_base_bdevs": 2, 00:16:22.769 "num_base_bdevs_discovered": 2, 00:16:22.769 "num_base_bdevs_operational": 2, 00:16:22.769 "process": { 00:16:22.769 "type": "rebuild", 00:16:22.769 "target": "spare", 00:16:22.769 "progress": { 00:16:22.769 "blocks": 2560, 00:16:22.769 "percent": 32 00:16:22.769 } 00:16:22.769 }, 00:16:22.769 "base_bdevs_list": [ 00:16:22.769 { 00:16:22.769 "name": "spare", 00:16:22.769 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:22.769 "is_configured": true, 00:16:22.769 "data_offset": 256, 00:16:22.769 "data_size": 7936 00:16:22.769 }, 00:16:22.769 { 00:16:22.769 "name": "BaseBdev2", 00:16:22.769 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:22.769 "is_configured": true, 00:16:22.769 "data_offset": 256, 00:16:22.769 "data_size": 7936 00:16:22.769 } 00:16:22.769 ] 00:16:22.769 }' 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.769 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.029 [2024-11-28 18:56:52.375926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.029 [2024-11-28 18:56:52.417406] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.029 [2024-11-28 
18:56:52.417474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.029 [2024-11-28 18:56:52.417488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.029 [2024-11-28 18:56:52.417498] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.029 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.030 "name": "raid_bdev1", 00:16:23.030 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:23.030 "strip_size_kb": 0, 00:16:23.030 "state": "online", 00:16:23.030 "raid_level": "raid1", 00:16:23.030 "superblock": true, 00:16:23.030 "num_base_bdevs": 2, 00:16:23.030 "num_base_bdevs_discovered": 1, 00:16:23.030 "num_base_bdevs_operational": 1, 00:16:23.030 "base_bdevs_list": [ 00:16:23.030 { 00:16:23.030 "name": null, 00:16:23.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.030 "is_configured": false, 00:16:23.030 "data_offset": 0, 00:16:23.030 "data_size": 7936 00:16:23.030 }, 00:16:23.030 { 00:16:23.030 "name": "BaseBdev2", 00:16:23.030 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:23.030 "is_configured": true, 00:16:23.030 "data_offset": 256, 00:16:23.030 "data_size": 7936 00:16:23.030 } 00:16:23.030 ] 00:16:23.030 }' 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.030 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.289 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:23.289 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.289 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.549 [2024-11-28 18:56:52.897938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:23.549 [2024-11-28 18:56:52.898000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.549 [2024-11-28 18:56:52.898019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:16:23.549 [2024-11-28 18:56:52.898030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.549 [2024-11-28 18:56:52.898466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.549 [2024-11-28 18:56:52.898488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:23.549 [2024-11-28 18:56:52.898544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:23.549 [2024-11-28 18:56:52.898559] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:23.549 [2024-11-28 18:56:52.898567] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:23.549 [2024-11-28 18:56:52.898589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.549 [2024-11-28 18:56:52.903063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:16:23.549 spare 00:16:23.549 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.549 18:56:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:23.549 [2024-11-28 18:56:52.904988] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.488 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.488 "name": "raid_bdev1", 00:16:24.489 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:24.489 "strip_size_kb": 0, 00:16:24.489 "state": "online", 00:16:24.489 "raid_level": "raid1", 00:16:24.489 "superblock": true, 00:16:24.489 "num_base_bdevs": 2, 00:16:24.489 "num_base_bdevs_discovered": 2, 00:16:24.489 "num_base_bdevs_operational": 2, 00:16:24.489 "process": { 00:16:24.489 "type": "rebuild", 00:16:24.489 "target": "spare", 00:16:24.489 "progress": { 00:16:24.489 "blocks": 2560, 00:16:24.489 "percent": 32 00:16:24.489 } 00:16:24.489 }, 00:16:24.489 "base_bdevs_list": [ 00:16:24.489 { 00:16:24.489 "name": "spare", 00:16:24.489 "uuid": "93ccb4f6-635d-52e5-9a7d-59b52bddbcf2", 00:16:24.489 "is_configured": true, 00:16:24.489 "data_offset": 256, 00:16:24.489 "data_size": 7936 00:16:24.489 }, 00:16:24.489 { 00:16:24.489 "name": "BaseBdev2", 00:16:24.489 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:24.489 "is_configured": true, 00:16:24.489 "data_offset": 256, 00:16:24.489 "data_size": 7936 00:16:24.489 } 00:16:24.489 ] 00:16:24.489 }' 00:16:24.489 18:56:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.489 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:24.489 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.489 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.489 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:24.489 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.489 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.489 [2024-11-28 18:56:54.043601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:24.753 [2024-11-28 18:56:54.111029] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:24.753 [2024-11-28 18:56:54.111123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.753 [2024-11-28 18:56:54.111142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:24.753 [2024-11-28 18:56:54.111149] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.753 "name": "raid_bdev1", 00:16:24.753 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:24.753 "strip_size_kb": 0, 00:16:24.753 "state": "online", 00:16:24.753 "raid_level": "raid1", 00:16:24.753 "superblock": true, 00:16:24.753 "num_base_bdevs": 2, 00:16:24.753 "num_base_bdevs_discovered": 1, 00:16:24.753 "num_base_bdevs_operational": 1, 00:16:24.753 "base_bdevs_list": [ 00:16:24.753 { 00:16:24.753 "name": null, 00:16:24.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.753 "is_configured": false, 00:16:24.753 "data_offset": 0, 00:16:24.753 "data_size": 7936 00:16:24.753 }, 00:16:24.753 { 00:16:24.753 "name": "BaseBdev2", 00:16:24.753 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:24.753 "is_configured": true, 00:16:24.753 "data_offset": 256, 00:16:24.753 "data_size": 7936 00:16:24.753 } 00:16:24.753 ] 00:16:24.753 }' 
00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.753 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.050 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.317 "name": "raid_bdev1", 00:16:25.317 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:25.317 "strip_size_kb": 0, 00:16:25.317 "state": "online", 00:16:25.317 "raid_level": "raid1", 00:16:25.317 "superblock": true, 00:16:25.317 "num_base_bdevs": 2, 00:16:25.317 "num_base_bdevs_discovered": 1, 00:16:25.317 "num_base_bdevs_operational": 1, 00:16:25.317 "base_bdevs_list": [ 00:16:25.317 { 00:16:25.317 "name": null, 00:16:25.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.317 "is_configured": false, 00:16:25.317 "data_offset": 0, 
00:16:25.317 "data_size": 7936 00:16:25.317 }, 00:16:25.317 { 00:16:25.317 "name": "BaseBdev2", 00:16:25.317 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:25.317 "is_configured": true, 00:16:25.317 "data_offset": 256, 00:16:25.317 "data_size": 7936 00:16:25.317 } 00:16:25.317 ] 00:16:25.317 }' 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.317 [2024-11-28 18:56:54.767586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:25.317 [2024-11-28 18:56:54.767704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.317 [2024-11-28 18:56:54.767728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:25.317 [2024-11-28 18:56:54.767737] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.317 [2024-11-28 18:56:54.768142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.317 [2024-11-28 18:56:54.768161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:25.317 [2024-11-28 18:56:54.768232] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:25.317 [2024-11-28 18:56:54.768253] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:25.317 [2024-11-28 18:56:54.768264] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:25.317 [2024-11-28 18:56:54.768273] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:25.317 BaseBdev1 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.317 18:56:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.265 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.266 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.266 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.266 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.266 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.266 "name": "raid_bdev1", 00:16:26.266 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:26.266 "strip_size_kb": 0, 00:16:26.266 "state": "online", 00:16:26.266 "raid_level": "raid1", 00:16:26.266 "superblock": true, 00:16:26.266 "num_base_bdevs": 2, 00:16:26.266 "num_base_bdevs_discovered": 1, 00:16:26.266 "num_base_bdevs_operational": 1, 00:16:26.266 "base_bdevs_list": [ 00:16:26.266 { 00:16:26.266 "name": null, 00:16:26.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.266 "is_configured": false, 00:16:26.266 "data_offset": 0, 00:16:26.266 "data_size": 7936 00:16:26.266 }, 00:16:26.266 { 00:16:26.266 "name": "BaseBdev2", 00:16:26.266 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:26.266 "is_configured": true, 00:16:26.266 "data_offset": 256, 00:16:26.266 "data_size": 7936 00:16:26.266 } 00:16:26.266 ] 00:16:26.266 }' 00:16:26.266 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.266 18:56:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.835 "name": "raid_bdev1", 00:16:26.835 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:26.835 "strip_size_kb": 0, 00:16:26.835 "state": "online", 00:16:26.835 "raid_level": "raid1", 00:16:26.835 "superblock": true, 00:16:26.835 "num_base_bdevs": 2, 00:16:26.835 "num_base_bdevs_discovered": 1, 00:16:26.835 "num_base_bdevs_operational": 1, 00:16:26.835 "base_bdevs_list": [ 00:16:26.835 { 00:16:26.835 "name": null, 00:16:26.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.835 "is_configured": false, 00:16:26.835 "data_offset": 0, 00:16:26.835 "data_size": 7936 00:16:26.835 }, 00:16:26.835 { 00:16:26.835 "name": "BaseBdev2", 00:16:26.835 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:26.835 "is_configured": true, 
00:16:26.835 "data_offset": 256, 00:16:26.835 "data_size": 7936 00:16:26.835 } 00:16:26.835 ] 00:16:26.835 }' 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.835 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.835 [2024-11-28 18:56:56.420088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.835 [2024-11-28 18:56:56.420226] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.835 [2024-11-28 18:56:56.420241] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:26.835 request: 00:16:26.835 { 00:16:26.835 "base_bdev": "BaseBdev1", 00:16:26.835 "raid_bdev": "raid_bdev1", 00:16:26.835 "method": "bdev_raid_add_base_bdev", 00:16:26.835 "req_id": 1 00:16:26.835 } 00:16:26.835 Got JSON-RPC error response 00:16:26.835 response: 00:16:26.835 { 00:16:26.835 "code": -22, 00:16:26.835 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:26.835 } 00:16:26.836 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:26.836 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:16:26.836 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:26.836 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:26.836 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:26.836 18:56:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.217 "name": "raid_bdev1", 00:16:28.217 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:28.217 "strip_size_kb": 0, 00:16:28.217 "state": "online", 00:16:28.217 "raid_level": "raid1", 00:16:28.217 "superblock": true, 00:16:28.217 "num_base_bdevs": 2, 00:16:28.217 "num_base_bdevs_discovered": 1, 00:16:28.217 "num_base_bdevs_operational": 1, 00:16:28.217 "base_bdevs_list": [ 00:16:28.217 { 00:16:28.217 "name": null, 00:16:28.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.217 "is_configured": false, 00:16:28.217 "data_offset": 0, 00:16:28.217 "data_size": 7936 00:16:28.217 }, 00:16:28.217 { 00:16:28.217 "name": "BaseBdev2", 00:16:28.217 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:28.217 "is_configured": true, 00:16:28.217 "data_offset": 256, 00:16:28.217 "data_size": 7936 00:16:28.217 } 00:16:28.217 ] 00:16:28.217 }' 
00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.217 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.478 "name": "raid_bdev1", 00:16:28.478 "uuid": "4e6a1bde-3bc4-41e3-a84e-f678b8afeea7", 00:16:28.478 "strip_size_kb": 0, 00:16:28.478 "state": "online", 00:16:28.478 "raid_level": "raid1", 00:16:28.478 "superblock": true, 00:16:28.478 "num_base_bdevs": 2, 00:16:28.478 "num_base_bdevs_discovered": 1, 00:16:28.478 "num_base_bdevs_operational": 1, 00:16:28.478 "base_bdevs_list": [ 00:16:28.478 { 00:16:28.478 "name": null, 00:16:28.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.478 "is_configured": false, 00:16:28.478 "data_offset": 0, 
00:16:28.478 "data_size": 7936 00:16:28.478 }, 00:16:28.478 { 00:16:28.478 "name": "BaseBdev2", 00:16:28.478 "uuid": "19fe21fe-5d41-5740-946a-f710e0faf3a3", 00:16:28.478 "is_configured": true, 00:16:28.478 "data_offset": 256, 00:16:28.478 "data_size": 7936 00:16:28.478 } 00:16:28.478 ] 00:16:28.478 }' 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:28.478 18:56:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 98361 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 98361 ']' 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 98361 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98361 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98361' 00:16:28.478 killing process with pid 98361 00:16:28.478 Received shutdown signal, test time was about 60.000000 seconds 00:16:28.478 00:16:28.478 Latency(us) 00:16:28.478 [2024-11-28T18:56:58.084Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.478 [2024-11-28T18:56:58.084Z] =================================================================================================================== 00:16:28.478 [2024-11-28T18:56:58.084Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 98361 00:16:28.478 [2024-11-28 18:56:58.052634] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.478 [2024-11-28 18:56:58.052772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.478 [2024-11-28 18:56:58.052820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.478 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 98361 00:16:28.478 [2024-11-28 18:56:58.052831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:28.738 [2024-11-28 18:56:58.083975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.738 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:28.738 00:16:28.738 real 0m18.625s 00:16:28.738 user 0m24.841s 00:16:28.738 sys 0m2.724s 00:16:28.738 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.738 18:56:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:28.738 ************************************ 00:16:28.738 END TEST raid_rebuild_test_sb_4k 00:16:28.738 ************************************ 00:16:28.999 18:56:58 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:28.999 18:56:58 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:28.999 18:56:58 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:28.999 18:56:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.999 18:56:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.999 ************************************ 00:16:28.999 START TEST raid_state_function_test_sb_md_separate 00:16:28.999 ************************************ 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.999 18:56:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=99040 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99040' 00:16:28.999 Process raid pid: 99040 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 99040 00:16:28.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99040 ']' 00:16:28.999 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.000 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.000 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.000 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.000 18:56:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.000 [2024-11-28 18:56:58.491656] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:29.000 [2024-11-28 18:56:58.491838] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.260 [2024-11-28 18:56:58.634140] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:29.260 [2024-11-28 18:56:58.673386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.260 [2024-11-28 18:56:58.700776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.260 [2024-11-28 18:56:58.744690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.260 [2024-11-28 18:56:58.744734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.829 [2024-11-28 18:56:59.312745] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.829 [2024-11-28 18:56:59.312801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.829 [2024-11-28 18:56:59.312813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.829 [2024-11-28 18:56:59.312820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.829 "name": "Existed_Raid", 00:16:29.829 "uuid": "60f74168-b471-4295-aa17-85b0fe2906ee", 00:16:29.829 "strip_size_kb": 0, 00:16:29.829 "state": 
"configuring", 00:16:29.829 "raid_level": "raid1", 00:16:29.829 "superblock": true, 00:16:29.829 "num_base_bdevs": 2, 00:16:29.829 "num_base_bdevs_discovered": 0, 00:16:29.829 "num_base_bdevs_operational": 2, 00:16:29.829 "base_bdevs_list": [ 00:16:29.829 { 00:16:29.829 "name": "BaseBdev1", 00:16:29.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.829 "is_configured": false, 00:16:29.829 "data_offset": 0, 00:16:29.829 "data_size": 0 00:16:29.829 }, 00:16:29.829 { 00:16:29.829 "name": "BaseBdev2", 00:16:29.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.829 "is_configured": false, 00:16:29.829 "data_offset": 0, 00:16:29.829 "data_size": 0 00:16:29.829 } 00:16:29.829 ] 00:16:29.829 }' 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.829 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.399 [2024-11-28 18:56:59.780758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.399 [2024-11-28 18:56:59.780864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.399 [2024-11-28 18:56:59.792807] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.399 [2024-11-28 18:56:59.792882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.399 [2024-11-28 18:56:59.792926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.399 [2024-11-28 18:56:59.792946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.399 [2024-11-28 18:56:59.814288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.399 BaseBdev1 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 
00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.399 [ 00:16:30.399 { 00:16:30.399 "name": "BaseBdev1", 00:16:30.399 "aliases": [ 00:16:30.399 "ad5229d7-4b57-4ac3-8fec-137d09b74b1e" 00:16:30.399 ], 00:16:30.399 "product_name": "Malloc disk", 00:16:30.399 "block_size": 4096, 00:16:30.399 "num_blocks": 8192, 00:16:30.399 "uuid": "ad5229d7-4b57-4ac3-8fec-137d09b74b1e", 00:16:30.399 "md_size": 32, 00:16:30.399 "md_interleave": false, 00:16:30.399 "dif_type": 0, 00:16:30.399 "assigned_rate_limits": { 00:16:30.399 "rw_ios_per_sec": 0, 00:16:30.399 "rw_mbytes_per_sec": 0, 00:16:30.399 "r_mbytes_per_sec": 0, 00:16:30.399 "w_mbytes_per_sec": 0 00:16:30.399 }, 00:16:30.399 "claimed": true, 00:16:30.399 "claim_type": "exclusive_write", 00:16:30.399 "zoned": false, 00:16:30.399 "supported_io_types": { 00:16:30.399 "read": true, 00:16:30.399 "write": true, 00:16:30.399 "unmap": true, 
00:16:30.399 "flush": true, 00:16:30.399 "reset": true, 00:16:30.399 "nvme_admin": false, 00:16:30.399 "nvme_io": false, 00:16:30.399 "nvme_io_md": false, 00:16:30.399 "write_zeroes": true, 00:16:30.399 "zcopy": true, 00:16:30.399 "get_zone_info": false, 00:16:30.399 "zone_management": false, 00:16:30.399 "zone_append": false, 00:16:30.399 "compare": false, 00:16:30.399 "compare_and_write": false, 00:16:30.399 "abort": true, 00:16:30.399 "seek_hole": false, 00:16:30.399 "seek_data": false, 00:16:30.399 "copy": true, 00:16:30.399 "nvme_iov_md": false 00:16:30.399 }, 00:16:30.399 "memory_domains": [ 00:16:30.399 { 00:16:30.399 "dma_device_id": "system", 00:16:30.399 "dma_device_type": 1 00:16:30.399 }, 00:16:30.399 { 00:16:30.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.399 "dma_device_type": 2 00:16:30.399 } 00:16:30.399 ], 00:16:30.399 "driver_specific": {} 00:16:30.399 } 00:16:30.399 ] 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:30.399 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.400 18:56:59 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.400 "name": "Existed_Raid", 00:16:30.400 "uuid": "87db00ac-6f66-4d12-b476-87840b622c28", 00:16:30.400 "strip_size_kb": 0, 00:16:30.400 "state": "configuring", 00:16:30.400 "raid_level": "raid1", 00:16:30.400 "superblock": true, 00:16:30.400 "num_base_bdevs": 2, 00:16:30.400 "num_base_bdevs_discovered": 1, 00:16:30.400 "num_base_bdevs_operational": 2, 00:16:30.400 "base_bdevs_list": [ 00:16:30.400 { 00:16:30.400 "name": "BaseBdev1", 00:16:30.400 "uuid": "ad5229d7-4b57-4ac3-8fec-137d09b74b1e", 00:16:30.400 "is_configured": true, 00:16:30.400 "data_offset": 256, 00:16:30.400 "data_size": 7936 00:16:30.400 }, 00:16:30.400 { 00:16:30.400 "name": "BaseBdev2", 00:16:30.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.400 "is_configured": 
false, 00:16:30.400 "data_offset": 0, 00:16:30.400 "data_size": 0 00:16:30.400 } 00:16:30.400 ] 00:16:30.400 }' 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.400 18:56:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.971 [2024-11-28 18:57:00.298474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.971 [2024-11-28 18:57:00.298517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.971 [2024-11-28 18:57:00.310552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.971 [2024-11-28 18:57:00.312398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.971 [2024-11-28 18:57:00.312449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.971 18:57:00 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.971 18:57:00 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.971 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.972 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.972 "name": "Existed_Raid", 00:16:30.972 "uuid": "26903a72-421e-48e4-af26-6e1656d4685c", 00:16:30.972 "strip_size_kb": 0, 00:16:30.972 "state": "configuring", 00:16:30.972 "raid_level": "raid1", 00:16:30.972 "superblock": true, 00:16:30.972 "num_base_bdevs": 2, 00:16:30.972 "num_base_bdevs_discovered": 1, 00:16:30.972 "num_base_bdevs_operational": 2, 00:16:30.972 "base_bdevs_list": [ 00:16:30.972 { 00:16:30.972 "name": "BaseBdev1", 00:16:30.972 "uuid": "ad5229d7-4b57-4ac3-8fec-137d09b74b1e", 00:16:30.972 "is_configured": true, 00:16:30.972 "data_offset": 256, 00:16:30.972 "data_size": 7936 00:16:30.972 }, 00:16:30.972 { 00:16:30.972 "name": "BaseBdev2", 00:16:30.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.972 "is_configured": false, 00:16:30.972 "data_offset": 0, 00:16:30.972 "data_size": 0 00:16:30.972 } 00:16:30.972 ] 00:16:30.972 }' 00:16:30.972 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.972 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.232 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:31.232 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.233 [2024-11-28 
18:57:00.730265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.233 [2024-11-28 18:57:00.730520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:31.233 [2024-11-28 18:57:00.730580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:31.233 [2024-11-28 18:57:00.730721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:31.233 [2024-11-28 18:57:00.730887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:31.233 BaseBdev2 00:16:31.233 [2024-11-28 18:57:00.730930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:16:31.233 [2024-11-28 18:57:00.731018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.233 [ 00:16:31.233 { 00:16:31.233 "name": "BaseBdev2", 00:16:31.233 "aliases": [ 00:16:31.233 "7d748a23-4a27-47f3-9e24-99f2afbb3964" 00:16:31.233 ], 00:16:31.233 "product_name": "Malloc disk", 00:16:31.233 "block_size": 4096, 00:16:31.233 "num_blocks": 8192, 00:16:31.233 "uuid": "7d748a23-4a27-47f3-9e24-99f2afbb3964", 00:16:31.233 "md_size": 32, 00:16:31.233 "md_interleave": false, 00:16:31.233 "dif_type": 0, 00:16:31.233 "assigned_rate_limits": { 00:16:31.233 "rw_ios_per_sec": 0, 00:16:31.233 "rw_mbytes_per_sec": 0, 00:16:31.233 "r_mbytes_per_sec": 0, 00:16:31.233 "w_mbytes_per_sec": 0 00:16:31.233 }, 00:16:31.233 "claimed": true, 00:16:31.233 "claim_type": "exclusive_write", 00:16:31.233 "zoned": false, 00:16:31.233 "supported_io_types": { 00:16:31.233 "read": true, 00:16:31.233 "write": true, 00:16:31.233 "unmap": true, 00:16:31.233 "flush": true, 00:16:31.233 "reset": true, 00:16:31.233 "nvme_admin": false, 00:16:31.233 "nvme_io": false, 00:16:31.233 "nvme_io_md": false, 00:16:31.233 "write_zeroes": true, 00:16:31.233 "zcopy": true, 00:16:31.233 "get_zone_info": false, 00:16:31.233 "zone_management": false, 00:16:31.233 "zone_append": false, 00:16:31.233 "compare": false, 00:16:31.233 "compare_and_write": false, 00:16:31.233 "abort": true, 00:16:31.233 "seek_hole": false, 
00:16:31.233 "seek_data": false, 00:16:31.233 "copy": true, 00:16:31.233 "nvme_iov_md": false 00:16:31.233 }, 00:16:31.233 "memory_domains": [ 00:16:31.233 { 00:16:31.233 "dma_device_id": "system", 00:16:31.233 "dma_device_type": 1 00:16:31.233 }, 00:16:31.233 { 00:16:31.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.233 "dma_device_type": 2 00:16:31.233 } 00:16:31.233 ], 00:16:31.233 "driver_specific": {} 00:16:31.233 } 00:16:31.233 ] 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.233 
18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.233 "name": "Existed_Raid", 00:16:31.233 "uuid": "26903a72-421e-48e4-af26-6e1656d4685c", 00:16:31.233 "strip_size_kb": 0, 00:16:31.233 "state": "online", 00:16:31.233 "raid_level": "raid1", 00:16:31.233 "superblock": true, 00:16:31.233 "num_base_bdevs": 2, 00:16:31.233 "num_base_bdevs_discovered": 2, 00:16:31.233 "num_base_bdevs_operational": 2, 00:16:31.233 "base_bdevs_list": [ 00:16:31.233 { 00:16:31.233 "name": "BaseBdev1", 00:16:31.233 "uuid": "ad5229d7-4b57-4ac3-8fec-137d09b74b1e", 00:16:31.233 "is_configured": true, 00:16:31.233 "data_offset": 256, 00:16:31.233 "data_size": 7936 00:16:31.233 }, 00:16:31.233 { 00:16:31.233 "name": "BaseBdev2", 00:16:31.233 "uuid": "7d748a23-4a27-47f3-9e24-99f2afbb3964", 00:16:31.233 "is_configured": true, 00:16:31.233 "data_offset": 256, 00:16:31.233 "data_size": 7936 00:16:31.233 } 00:16:31.233 ] 00:16:31.233 }' 00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:16:31.233 18:57:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.803 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:31.803 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:31.803 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.803 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.803 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.803 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.803 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:31.803 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.804 [2024-11-28 18:57:01.226704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.804 "name": "Existed_Raid", 00:16:31.804 "aliases": [ 00:16:31.804 "26903a72-421e-48e4-af26-6e1656d4685c" 00:16:31.804 ], 00:16:31.804 "product_name": "Raid Volume", 00:16:31.804 "block_size": 4096, 00:16:31.804 "num_blocks": 7936, 
00:16:31.804 "uuid": "26903a72-421e-48e4-af26-6e1656d4685c", 00:16:31.804 "md_size": 32, 00:16:31.804 "md_interleave": false, 00:16:31.804 "dif_type": 0, 00:16:31.804 "assigned_rate_limits": { 00:16:31.804 "rw_ios_per_sec": 0, 00:16:31.804 "rw_mbytes_per_sec": 0, 00:16:31.804 "r_mbytes_per_sec": 0, 00:16:31.804 "w_mbytes_per_sec": 0 00:16:31.804 }, 00:16:31.804 "claimed": false, 00:16:31.804 "zoned": false, 00:16:31.804 "supported_io_types": { 00:16:31.804 "read": true, 00:16:31.804 "write": true, 00:16:31.804 "unmap": false, 00:16:31.804 "flush": false, 00:16:31.804 "reset": true, 00:16:31.804 "nvme_admin": false, 00:16:31.804 "nvme_io": false, 00:16:31.804 "nvme_io_md": false, 00:16:31.804 "write_zeroes": true, 00:16:31.804 "zcopy": false, 00:16:31.804 "get_zone_info": false, 00:16:31.804 "zone_management": false, 00:16:31.804 "zone_append": false, 00:16:31.804 "compare": false, 00:16:31.804 "compare_and_write": false, 00:16:31.804 "abort": false, 00:16:31.804 "seek_hole": false, 00:16:31.804 "seek_data": false, 00:16:31.804 "copy": false, 00:16:31.804 "nvme_iov_md": false 00:16:31.804 }, 00:16:31.804 "memory_domains": [ 00:16:31.804 { 00:16:31.804 "dma_device_id": "system", 00:16:31.804 "dma_device_type": 1 00:16:31.804 }, 00:16:31.804 { 00:16:31.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.804 "dma_device_type": 2 00:16:31.804 }, 00:16:31.804 { 00:16:31.804 "dma_device_id": "system", 00:16:31.804 "dma_device_type": 1 00:16:31.804 }, 00:16:31.804 { 00:16:31.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.804 "dma_device_type": 2 00:16:31.804 } 00:16:31.804 ], 00:16:31.804 "driver_specific": { 00:16:31.804 "raid": { 00:16:31.804 "uuid": "26903a72-421e-48e4-af26-6e1656d4685c", 00:16:31.804 "strip_size_kb": 0, 00:16:31.804 "state": "online", 00:16:31.804 "raid_level": "raid1", 00:16:31.804 "superblock": true, 00:16:31.804 "num_base_bdevs": 2, 00:16:31.804 "num_base_bdevs_discovered": 2, 00:16:31.804 "num_base_bdevs_operational": 2, 00:16:31.804 
"base_bdevs_list": [ 00:16:31.804 { 00:16:31.804 "name": "BaseBdev1", 00:16:31.804 "uuid": "ad5229d7-4b57-4ac3-8fec-137d09b74b1e", 00:16:31.804 "is_configured": true, 00:16:31.804 "data_offset": 256, 00:16:31.804 "data_size": 7936 00:16:31.804 }, 00:16:31.804 { 00:16:31.804 "name": "BaseBdev2", 00:16:31.804 "uuid": "7d748a23-4a27-47f3-9e24-99f2afbb3964", 00:16:31.804 "is_configured": true, 00:16:31.804 "data_offset": 256, 00:16:31.804 "data_size": 7936 00:16:31.804 } 00:16:31.804 ] 00:16:31.804 } 00:16:31.804 } 00:16:31.804 }' 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:31.804 BaseBdev2' 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:31.804 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.065 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.065 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.065 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.065 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.065 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.065 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:32.065 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:32.065 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.066 [2024-11-28 18:57:01.458557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.066 "name": "Existed_Raid", 00:16:32.066 "uuid": "26903a72-421e-48e4-af26-6e1656d4685c", 00:16:32.066 "strip_size_kb": 0, 00:16:32.066 "state": "online", 00:16:32.066 "raid_level": "raid1", 00:16:32.066 "superblock": true, 00:16:32.066 "num_base_bdevs": 2, 00:16:32.066 "num_base_bdevs_discovered": 1, 00:16:32.066 "num_base_bdevs_operational": 1, 00:16:32.066 "base_bdevs_list": [ 00:16:32.066 { 00:16:32.066 "name": null, 00:16:32.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.066 "is_configured": false, 00:16:32.066 "data_offset": 0, 00:16:32.066 "data_size": 7936 00:16:32.066 }, 00:16:32.066 { 00:16:32.066 "name": "BaseBdev2", 00:16:32.066 "uuid": "7d748a23-4a27-47f3-9e24-99f2afbb3964", 00:16:32.066 "is_configured": true, 00:16:32.066 "data_offset": 256, 00:16:32.066 "data_size": 7936 00:16:32.066 } 00:16:32.066 ] 00:16:32.066 }' 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.066 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs 
)) 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.327 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.327 [2024-11-28 18:57:01.926798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.327 [2024-11-28 18:57:01.926897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.587 [2024-11-28 18:57:01.939189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.588 [2024-11-28 18:57:01.939321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.588 [2024-11-28 18:57:01.939335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:16:32.588 18:57:01 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 99040 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99040 ']' 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99040 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:32.588 18:57:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.588 18:57:02 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99040 00:16:32.588 18:57:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.588 18:57:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.588 18:57:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99040' 00:16:32.588 killing process with pid 99040 00:16:32.588 18:57:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99040 00:16:32.588 [2024-11-28 18:57:02.038948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.588 18:57:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99040 00:16:32.588 [2024-11-28 18:57:02.039926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.849 18:57:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:32.849 00:16:32.849 real 0m3.881s 00:16:32.849 user 0m6.048s 00:16:32.849 sys 0m0.897s 00:16:32.849 18:57:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.849 ************************************ 00:16:32.849 END TEST raid_state_function_test_sb_md_separate 00:16:32.849 ************************************ 00:16:32.849 18:57:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.849 18:57:02 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:32.849 18:57:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:32.849 18:57:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.849 18:57:02 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.849 ************************************ 00:16:32.849 START TEST raid_superblock_test_md_separate 00:16:32.849 ************************************ 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:32.849 18:57:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=99276 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 99276 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99276 ']' 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.849 18:57:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.849 [2024-11-28 18:57:02.447074] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:32.849 [2024-11-28 18:57:02.447277] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99276 ] 00:16:33.109 [2024-11-28 18:57:02.588066] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:33.109 [2024-11-28 18:57:02.628609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.109 [2024-11-28 18:57:02.655289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.109 [2024-11-28 18:57:02.698588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.109 [2024-11-28 18:57:02.698700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.678 18:57:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.678 malloc1 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.678 [2024-11-28 18:57:03.268292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:33.678 [2024-11-28 18:57:03.268354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.678 [2024-11-28 18:57:03.268397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:33.678 [2024-11-28 18:57:03.268407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.678 [2024-11-28 18:57:03.270297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.678 [2024-11-28 18:57:03.270334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:33.678 pt1 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:33.678 18:57:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.678 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.938 malloc2 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.938 [2024-11-28 18:57:03.297626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.938 [2024-11-28 18:57:03.297773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.938 [2024-11-28 18:57:03.297809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:33.938 [2024-11-28 18:57:03.297836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.938 [2024-11-28 18:57:03.299678] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.938 [2024-11-28 18:57:03.299760] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.938 pt2 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.938 [2024-11-28 18:57:03.309650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:33.938 [2024-11-28 18:57:03.311525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.938 [2024-11-28 18:57:03.311741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:33.938 [2024-11-28 18:57:03.311789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:33.938 [2024-11-28 18:57:03.311890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:33.938 [2024-11-28 18:57:03.312042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:33.938 [2024-11-28 18:57:03.312086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:33.938 [2024-11-28 18:57:03.312219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.938 18:57:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.938 18:57:03 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.938 "name": "raid_bdev1", 00:16:33.938 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:33.938 "strip_size_kb": 0, 00:16:33.938 "state": "online", 00:16:33.938 "raid_level": "raid1", 00:16:33.938 "superblock": true, 00:16:33.938 "num_base_bdevs": 2, 00:16:33.939 "num_base_bdevs_discovered": 2, 00:16:33.939 "num_base_bdevs_operational": 2, 00:16:33.939 "base_bdevs_list": [ 00:16:33.939 { 00:16:33.939 "name": "pt1", 00:16:33.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.939 "is_configured": true, 00:16:33.939 "data_offset": 256, 00:16:33.939 "data_size": 7936 00:16:33.939 }, 00:16:33.939 { 00:16:33.939 "name": "pt2", 00:16:33.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.939 "is_configured": true, 00:16:33.939 "data_offset": 256, 00:16:33.939 "data_size": 7936 00:16:33.939 } 00:16:33.939 ] 00:16:33.939 }' 00:16:33.939 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.939 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.198 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:34.198 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:34.198 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:34.198 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:34.198 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:34.199 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:34.199 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.199 
18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:34.199 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.199 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.199 [2024-11-28 18:57:03.778025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.199 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.458 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:34.458 "name": "raid_bdev1", 00:16:34.458 "aliases": [ 00:16:34.458 "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97" 00:16:34.458 ], 00:16:34.458 "product_name": "Raid Volume", 00:16:34.458 "block_size": 4096, 00:16:34.458 "num_blocks": 7936, 00:16:34.458 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:34.458 "md_size": 32, 00:16:34.458 "md_interleave": false, 00:16:34.458 "dif_type": 0, 00:16:34.458 "assigned_rate_limits": { 00:16:34.458 "rw_ios_per_sec": 0, 00:16:34.458 "rw_mbytes_per_sec": 0, 00:16:34.458 "r_mbytes_per_sec": 0, 00:16:34.458 "w_mbytes_per_sec": 0 00:16:34.458 }, 00:16:34.458 "claimed": false, 00:16:34.458 "zoned": false, 00:16:34.458 "supported_io_types": { 00:16:34.458 "read": true, 00:16:34.458 "write": true, 00:16:34.458 "unmap": false, 00:16:34.458 "flush": false, 00:16:34.458 "reset": true, 00:16:34.458 "nvme_admin": false, 00:16:34.458 "nvme_io": false, 00:16:34.458 "nvme_io_md": false, 00:16:34.458 "write_zeroes": true, 00:16:34.458 "zcopy": false, 00:16:34.458 "get_zone_info": false, 00:16:34.458 "zone_management": false, 00:16:34.458 "zone_append": false, 00:16:34.458 "compare": false, 00:16:34.458 "compare_and_write": false, 00:16:34.458 "abort": false, 00:16:34.458 "seek_hole": false, 00:16:34.458 "seek_data": false, 00:16:34.458 "copy": false, 00:16:34.458 "nvme_iov_md": false 
00:16:34.458 }, 00:16:34.458 "memory_domains": [ 00:16:34.458 { 00:16:34.458 "dma_device_id": "system", 00:16:34.458 "dma_device_type": 1 00:16:34.458 }, 00:16:34.458 { 00:16:34.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.458 "dma_device_type": 2 00:16:34.458 }, 00:16:34.458 { 00:16:34.458 "dma_device_id": "system", 00:16:34.458 "dma_device_type": 1 00:16:34.458 }, 00:16:34.458 { 00:16:34.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.458 "dma_device_type": 2 00:16:34.458 } 00:16:34.458 ], 00:16:34.458 "driver_specific": { 00:16:34.458 "raid": { 00:16:34.458 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:34.458 "strip_size_kb": 0, 00:16:34.458 "state": "online", 00:16:34.458 "raid_level": "raid1", 00:16:34.458 "superblock": true, 00:16:34.458 "num_base_bdevs": 2, 00:16:34.458 "num_base_bdevs_discovered": 2, 00:16:34.458 "num_base_bdevs_operational": 2, 00:16:34.458 "base_bdevs_list": [ 00:16:34.458 { 00:16:34.458 "name": "pt1", 00:16:34.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.458 "is_configured": true, 00:16:34.458 "data_offset": 256, 00:16:34.458 "data_size": 7936 00:16:34.458 }, 00:16:34.458 { 00:16:34.458 "name": "pt2", 00:16:34.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.458 "is_configured": true, 00:16:34.458 "data_offset": 256, 00:16:34.458 "data_size": 7936 00:16:34.458 } 00:16:34.458 ] 00:16:34.458 } 00:16:34.458 } 00:16:34.458 }' 00:16:34.458 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.458 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:34.458 pt2' 00:16:34.458 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- 
# cmp_raid_bdev='4096 32 false 0' 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.459 18:57:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:34.459 18:57:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.459 [2024-11-28 18:57:04.018014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5ace4d7c-5371-490b-bb2e-76d1ea9d2a97 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 5ace4d7c-5371-490b-bb2e-76d1ea9d2a97 ']' 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.459 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.719 [2024-11-28 18:57:04.061818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.719 [2024-11-28 18:57:04.061843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.719 [2024-11-28 18:57:04.061937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.719 [2024-11-28 18:57:04.061995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:16:34.719 [2024-11-28 18:57:04.062009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:34.719 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.720 [2024-11-28 18:57:04.197869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:34.720 [2024-11-28 18:57:04.199624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:34.720 [2024-11-28 18:57:04.199719] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:34.720 [2024-11-28 18:57:04.199813] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:34.720 [2024-11-28 18:57:04.199862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.720 [2024-11-28 18:57:04.199883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:16:34.720 request: 00:16:34.720 { 00:16:34.720 "name": "raid_bdev1", 00:16:34.720 "raid_level": "raid1", 00:16:34.720 "base_bdevs": [ 00:16:34.720 "malloc1", 00:16:34.720 "malloc2" 00:16:34.720 ], 00:16:34.720 "superblock": false, 00:16:34.720 "method": "bdev_raid_create", 00:16:34.720 "req_id": 1 00:16:34.720 } 00:16:34.720 Got JSON-RPC error response 00:16:34.720 response: 00:16:34.720 { 00:16:34.720 "code": -17, 00:16:34.720 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:34.720 } 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 
00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.720 [2024-11-28 18:57:04.265857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:34.720 [2024-11-28 18:57:04.265965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.720 [2024-11-28 18:57:04.265997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:34.720 [2024-11-28 18:57:04.266028] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.720 [2024-11-28 18:57:04.267841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.720 [2024-11-28 18:57:04.267927] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:34.720 [2024-11-28 18:57:04.267987] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:34.720 [2024-11-28 18:57:04.268051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.720 pt1 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.720 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.980 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.980 "name": "raid_bdev1", 00:16:34.980 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:34.980 "strip_size_kb": 0, 00:16:34.980 "state": "configuring", 00:16:34.980 "raid_level": "raid1", 00:16:34.980 "superblock": true, 00:16:34.980 "num_base_bdevs": 2, 00:16:34.980 "num_base_bdevs_discovered": 1, 00:16:34.980 "num_base_bdevs_operational": 2, 00:16:34.980 "base_bdevs_list": [ 00:16:34.980 { 00:16:34.980 "name": "pt1", 00:16:34.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.980 "is_configured": true, 00:16:34.980 "data_offset": 256, 00:16:34.980 "data_size": 7936 00:16:34.980 }, 00:16:34.980 { 00:16:34.980 "name": null, 00:16:34.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.980 "is_configured": false, 00:16:34.980 "data_offset": 256, 00:16:34.980 "data_size": 7936 00:16:34.981 } 00:16:34.981 ] 00:16:34.981 }' 00:16:34.981 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.981 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.241 [2024-11-28 18:57:04.733986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.241 [2024-11-28 18:57:04.734047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.241 [2024-11-28 18:57:04.734067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:35.241 [2024-11-28 18:57:04.734078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.241 [2024-11-28 18:57:04.734213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.241 [2024-11-28 18:57:04.734227] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.241 [2024-11-28 18:57:04.734265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:35.241 [2024-11-28 18:57:04.734283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.241 [2024-11-28 18:57:04.734355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:35.241 [2024-11-28 18:57:04.734391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.241 [2024-11-28 18:57:04.734476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:35.241 [2024-11-28 18:57:04.734570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:35.241 [2024-11-28 18:57:04.734578] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:35.241 [2024-11-28 18:57:04.734641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.241 pt2 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.241 18:57:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.241 "name": "raid_bdev1", 00:16:35.241 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:35.241 "strip_size_kb": 0, 00:16:35.241 "state": "online", 00:16:35.241 "raid_level": "raid1", 00:16:35.241 "superblock": true, 00:16:35.241 "num_base_bdevs": 2, 00:16:35.241 "num_base_bdevs_discovered": 2, 00:16:35.241 "num_base_bdevs_operational": 2, 00:16:35.241 "base_bdevs_list": [ 00:16:35.241 { 00:16:35.241 "name": "pt1", 00:16:35.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.241 "is_configured": true, 00:16:35.241 "data_offset": 256, 00:16:35.241 "data_size": 7936 00:16:35.241 }, 00:16:35.241 { 00:16:35.241 "name": "pt2", 00:16:35.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.241 "is_configured": true, 00:16:35.241 "data_offset": 256, 00:16:35.241 "data_size": 7936 00:16:35.241 } 00:16:35.241 ] 00:16:35.241 }' 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.241 18:57:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.810 [2024-11-28 18:57:05.194342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.810 "name": "raid_bdev1", 00:16:35.810 "aliases": [ 00:16:35.810 "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97" 00:16:35.810 ], 00:16:35.810 "product_name": "Raid Volume", 00:16:35.810 "block_size": 4096, 00:16:35.810 "num_blocks": 7936, 00:16:35.810 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:35.810 "md_size": 32, 00:16:35.810 "md_interleave": false, 00:16:35.810 "dif_type": 0, 00:16:35.810 "assigned_rate_limits": { 00:16:35.810 "rw_ios_per_sec": 0, 00:16:35.810 "rw_mbytes_per_sec": 0, 00:16:35.810 "r_mbytes_per_sec": 0, 00:16:35.810 "w_mbytes_per_sec": 0 00:16:35.810 }, 00:16:35.810 "claimed": false, 00:16:35.810 "zoned": false, 00:16:35.810 "supported_io_types": { 00:16:35.810 "read": true, 00:16:35.810 "write": true, 00:16:35.810 "unmap": false, 00:16:35.810 
"flush": false, 00:16:35.810 "reset": true, 00:16:35.810 "nvme_admin": false, 00:16:35.810 "nvme_io": false, 00:16:35.810 "nvme_io_md": false, 00:16:35.810 "write_zeroes": true, 00:16:35.810 "zcopy": false, 00:16:35.810 "get_zone_info": false, 00:16:35.810 "zone_management": false, 00:16:35.810 "zone_append": false, 00:16:35.810 "compare": false, 00:16:35.810 "compare_and_write": false, 00:16:35.810 "abort": false, 00:16:35.810 "seek_hole": false, 00:16:35.810 "seek_data": false, 00:16:35.810 "copy": false, 00:16:35.810 "nvme_iov_md": false 00:16:35.810 }, 00:16:35.810 "memory_domains": [ 00:16:35.810 { 00:16:35.810 "dma_device_id": "system", 00:16:35.810 "dma_device_type": 1 00:16:35.810 }, 00:16:35.810 { 00:16:35.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.810 "dma_device_type": 2 00:16:35.810 }, 00:16:35.810 { 00:16:35.810 "dma_device_id": "system", 00:16:35.810 "dma_device_type": 1 00:16:35.810 }, 00:16:35.810 { 00:16:35.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.810 "dma_device_type": 2 00:16:35.810 } 00:16:35.810 ], 00:16:35.810 "driver_specific": { 00:16:35.810 "raid": { 00:16:35.810 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:35.810 "strip_size_kb": 0, 00:16:35.810 "state": "online", 00:16:35.810 "raid_level": "raid1", 00:16:35.810 "superblock": true, 00:16:35.810 "num_base_bdevs": 2, 00:16:35.810 "num_base_bdevs_discovered": 2, 00:16:35.810 "num_base_bdevs_operational": 2, 00:16:35.810 "base_bdevs_list": [ 00:16:35.810 { 00:16:35.810 "name": "pt1", 00:16:35.810 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.810 "is_configured": true, 00:16:35.810 "data_offset": 256, 00:16:35.810 "data_size": 7936 00:16:35.810 }, 00:16:35.810 { 00:16:35.810 "name": "pt2", 00:16:35.810 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.810 "is_configured": true, 00:16:35.810 "data_offset": 256, 00:16:35.810 "data_size": 7936 00:16:35.810 } 00:16:35.810 ] 00:16:35.810 } 00:16:35.810 } 00:16:35.810 }' 00:16:35.810 18:57:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:35.810 pt2' 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.810 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:35.811 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:35.811 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.811 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.811 18:57:05 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:35.811 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.811 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.811 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.070 [2024-11-28 18:57:05.442397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 5ace4d7c-5371-490b-bb2e-76d1ea9d2a97 '!=' 5ace4d7c-5371-490b-bb2e-76d1ea9d2a97 ']' 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd 
bdev_passthru_delete pt1 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.070 [2024-11-28 18:57:05.486205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.070 18:57:05 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.070 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.070 "name": "raid_bdev1", 00:16:36.070 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:36.070 "strip_size_kb": 0, 00:16:36.070 "state": "online", 00:16:36.070 "raid_level": "raid1", 00:16:36.070 "superblock": true, 00:16:36.070 "num_base_bdevs": 2, 00:16:36.070 "num_base_bdevs_discovered": 1, 00:16:36.070 "num_base_bdevs_operational": 1, 00:16:36.070 "base_bdevs_list": [ 00:16:36.070 { 00:16:36.070 "name": null, 00:16:36.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.070 "is_configured": false, 00:16:36.070 "data_offset": 0, 00:16:36.070 "data_size": 7936 00:16:36.070 }, 00:16:36.070 { 00:16:36.070 "name": "pt2", 00:16:36.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.071 "is_configured": true, 00:16:36.071 "data_offset": 256, 00:16:36.071 "data_size": 7936 00:16:36.071 } 00:16:36.071 ] 00:16:36.071 }' 00:16:36.071 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.071 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.639 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.639 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.639 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.639 [2024-11-28 18:57:05.962315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.639 
[2024-11-28 18:57:05.962393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.639 [2024-11-28 18:57:05.962478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.639 [2024-11-28 18:57:05.962549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.639 [2024-11-28 18:57:05.962583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:36.639 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.639 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.639 18:57:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:36.639 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.639 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.639 18:57:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.639 [2024-11-28 18:57:06.034333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.639 [2024-11-28 18:57:06.034384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.639 [2024-11-28 18:57:06.034400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:36.639 [2024-11-28 18:57:06.034410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.639 [2024-11-28 18:57:06.036279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.639 [2024-11-28 18:57:06.036324] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.639 [2024-11-28 18:57:06.036368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:36.639 [2024-11-28 
18:57:06.036400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.639 [2024-11-28 18:57:06.036481] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:36.639 [2024-11-28 18:57:06.036492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:36.639 [2024-11-28 18:57:06.036562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:36.639 [2024-11-28 18:57:06.036640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:36.639 [2024-11-28 18:57:06.036647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:36.639 [2024-11-28 18:57:06.036706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.639 pt2 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.639 "name": "raid_bdev1", 00:16:36.639 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:36.639 "strip_size_kb": 0, 00:16:36.639 "state": "online", 00:16:36.639 "raid_level": "raid1", 00:16:36.639 "superblock": true, 00:16:36.639 "num_base_bdevs": 2, 00:16:36.639 "num_base_bdevs_discovered": 1, 00:16:36.639 "num_base_bdevs_operational": 1, 00:16:36.639 "base_bdevs_list": [ 00:16:36.639 { 00:16:36.639 "name": null, 00:16:36.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.639 "is_configured": false, 00:16:36.639 "data_offset": 256, 00:16:36.639 "data_size": 7936 00:16:36.639 }, 00:16:36.639 { 00:16:36.639 "name": "pt2", 00:16:36.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.639 "is_configured": true, 00:16:36.639 "data_offset": 256, 00:16:36.639 "data_size": 7936 00:16:36.639 } 00:16:36.639 ] 00:16:36.639 }' 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.639 18:57:06 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.898 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.898 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.898 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.898 [2024-11-28 18:57:06.482453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.898 [2024-11-28 18:57:06.482481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.898 [2024-11-28 18:57:06.482527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.898 [2024-11-28 18:57:06.482562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.898 [2024-11-28 18:57:06.482570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:36.898 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.898 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.898 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:36.898 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.898 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:37.158 
18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.158 [2024-11-28 18:57:06.546481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.158 [2024-11-28 18:57:06.546529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.158 [2024-11-28 18:57:06.546549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:37.158 [2024-11-28 18:57:06.546557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.158 [2024-11-28 18:57:06.548401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.158 [2024-11-28 18:57:06.548462] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:37.158 [2024-11-28 18:57:06.548507] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:37.158 [2024-11-28 18:57:06.548532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:37.158 [2024-11-28 18:57:06.548625] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:37.158 [2024-11-28 18:57:06.548641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.158 [2024-11-28 18:57:06.548656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:16:37.158 [2024-11-28 18:57:06.548701] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.158 [2024-11-28 18:57:06.548761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:37.158 [2024-11-28 18:57:06.548769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:37.158 [2024-11-28 18:57:06.548823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:37.158 [2024-11-28 18:57:06.548893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:37.158 [2024-11-28 18:57:06.548904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:37.158 [2024-11-28 18:57:06.548968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.158 pt1 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.158 18:57:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.158 "name": "raid_bdev1", 00:16:37.158 "uuid": "5ace4d7c-5371-490b-bb2e-76d1ea9d2a97", 00:16:37.158 "strip_size_kb": 0, 00:16:37.158 "state": "online", 00:16:37.158 "raid_level": "raid1", 00:16:37.158 "superblock": true, 00:16:37.158 "num_base_bdevs": 2, 00:16:37.158 "num_base_bdevs_discovered": 1, 00:16:37.158 "num_base_bdevs_operational": 1, 00:16:37.158 "base_bdevs_list": [ 00:16:37.158 { 00:16:37.158 "name": null, 00:16:37.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.158 "is_configured": false, 00:16:37.158 "data_offset": 256, 00:16:37.158 "data_size": 7936 00:16:37.158 }, 00:16:37.158 { 00:16:37.158 "name": "pt2", 00:16:37.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.158 "is_configured": true, 00:16:37.158 "data_offset": 256, 00:16:37.158 "data_size": 7936 00:16:37.158 } 00:16:37.158 ] 00:16:37.158 }' 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:37.158 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.418 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:37.418 18:57:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:37.418 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.418 18:57:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.418 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.679 [2024-11-28 18:57:07.046828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 5ace4d7c-5371-490b-bb2e-76d1ea9d2a97 '!=' 5ace4d7c-5371-490b-bb2e-76d1ea9d2a97 ']' 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 99276 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99276 ']' 00:16:37.679 
18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 99276 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99276 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.679 killing process with pid 99276 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99276' 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 99276 00:16:37.679 [2024-11-28 18:57:07.128178] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.679 [2024-11-28 18:57:07.128241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.679 [2024-11-28 18:57:07.128287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.679 [2024-11-28 18:57:07.128298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:37.679 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 99276 00:16:37.679 [2024-11-28 18:57:07.152861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.939 18:57:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:37.939 00:16:37.939 real 0m5.032s 00:16:37.939 user 0m8.243s 00:16:37.939 sys 0m1.152s 00:16:37.939 18:57:07 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.939 18:57:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.939 ************************************ 00:16:37.939 END TEST raid_superblock_test_md_separate 00:16:37.939 ************************************ 00:16:37.939 18:57:07 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:37.939 18:57:07 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:37.939 18:57:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:37.939 18:57:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.939 18:57:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.939 ************************************ 00:16:37.939 START TEST raid_rebuild_test_sb_md_separate 00:16:37.939 ************************************ 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.939 18:57:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@597 -- # raid_pid=99593 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 99593 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99593 ']' 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.939 18:57:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.199 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:38.199 Zero copy mechanism will not be used. 00:16:38.199 [2024-11-28 18:57:07.567478] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:38.199 [2024-11-28 18:57:07.567616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99593 ] 00:16:38.199 [2024-11-28 18:57:07.703324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:38.199 [2024-11-28 18:57:07.743489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.199 [2024-11-28 18:57:07.770094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.458 [2024-11-28 18:57:07.813839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.458 [2024-11-28 18:57:07.813882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.028 BaseBdev1_malloc 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.028 [2024-11-28 18:57:08.399340] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:39.028 [2024-11-28 18:57:08.399396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.028 
[2024-11-28 18:57:08.399419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:39.028 [2024-11-28 18:57:08.399445] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.028 [2024-11-28 18:57:08.401432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.028 [2024-11-28 18:57:08.401498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:39.028 BaseBdev1 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.028 BaseBdev2_malloc 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.028 [2024-11-28 18:57:08.428691] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:39.028 [2024-11-28 18:57:08.428748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.028 [2024-11-28 18:57:08.428769] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:16:39.028 [2024-11-28 18:57:08.428781] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.028 [2024-11-28 18:57:08.430695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.028 [2024-11-28 18:57:08.430734] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:39.028 BaseBdev2 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.028 spare_malloc 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.028 spare_delay 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:16:39.028 [2024-11-28 18:57:08.479349] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:39.028 [2024-11-28 18:57:08.479421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.028 [2024-11-28 18:57:08.479452] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:39.028 [2024-11-28 18:57:08.479463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.028 [2024-11-28 18:57:08.481291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.028 [2024-11-28 18:57:08.481332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:39.028 spare 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.028 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.029 [2024-11-28 18:57:08.491407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.029 [2024-11-28 18:57:08.493227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.029 [2024-11-28 18:57:08.493389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:39.029 [2024-11-28 18:57:08.493408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:39.029 [2024-11-28 18:57:08.493490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:39.029 [2024-11-28 18:57:08.493598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007400 00:16:39.029 [2024-11-28 18:57:08.493614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:39.029 [2024-11-28 18:57:08.493723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.029 "name": "raid_bdev1", 00:16:39.029 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:39.029 "strip_size_kb": 0, 00:16:39.029 "state": "online", 00:16:39.029 "raid_level": "raid1", 00:16:39.029 "superblock": true, 00:16:39.029 "num_base_bdevs": 2, 00:16:39.029 "num_base_bdevs_discovered": 2, 00:16:39.029 "num_base_bdevs_operational": 2, 00:16:39.029 "base_bdevs_list": [ 00:16:39.029 { 00:16:39.029 "name": "BaseBdev1", 00:16:39.029 "uuid": "5203c041-f009-5f46-87c9-92858aaf6574", 00:16:39.029 "is_configured": true, 00:16:39.029 "data_offset": 256, 00:16:39.029 "data_size": 7936 00:16:39.029 }, 00:16:39.029 { 00:16:39.029 "name": "BaseBdev2", 00:16:39.029 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:39.029 "is_configured": true, 00:16:39.029 "data_offset": 256, 00:16:39.029 "data_size": 7936 00:16:39.029 } 00:16:39.029 ] 00:16:39.029 }' 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.029 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.600 [2024-11-28 18:57:08.919766] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.600 18:57:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:39.600 [2024-11-28 18:57:09.159577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:39.600 /dev/nbd0 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:39.860 18:57:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.860 1+0 records in 00:16:39.860 1+0 records out 00:16:39.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386402 s, 10.6 MB/s 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:39.860 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:39.861 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.861 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.861 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:39.861 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:39.861 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:40.430 7936+0 records in 00:16:40.430 7936+0 records out 00:16:40.430 32505856 bytes (33 MB, 31 MiB) copied, 0.61396 s, 52.9 MB/s 00:16:40.430 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:40.430 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.430 
18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:40.430 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:40.430 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:40.430 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.430 18:57:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:40.691 [2024-11-28 18:57:10.063050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:16:40.691 [2024-11-28 18:57:10.075118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.691 18:57:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.691 "name": "raid_bdev1", 00:16:40.691 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:40.691 "strip_size_kb": 0, 00:16:40.691 "state": "online", 00:16:40.691 "raid_level": "raid1", 00:16:40.691 "superblock": true, 00:16:40.691 "num_base_bdevs": 2, 00:16:40.691 "num_base_bdevs_discovered": 1, 00:16:40.691 "num_base_bdevs_operational": 1, 00:16:40.691 "base_bdevs_list": [ 00:16:40.691 { 00:16:40.691 "name": null, 00:16:40.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.691 "is_configured": false, 00:16:40.691 "data_offset": 0, 00:16:40.691 "data_size": 7936 00:16:40.691 }, 00:16:40.691 { 00:16:40.691 "name": "BaseBdev2", 00:16:40.691 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:40.691 "is_configured": true, 00:16:40.691 "data_offset": 256, 00:16:40.691 "data_size": 7936 00:16:40.691 } 00:16:40.691 ] 00:16:40.691 }' 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.691 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.951 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:40.951 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.951 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.951 [2024-11-28 18:57:10.515229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.951 [2024-11-28 18:57:10.517785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:16:40.951 [2024-11-28 18:57:10.519595] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:40.951 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.951 18:57:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.335 "name": "raid_bdev1", 00:16:42.335 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:42.335 "strip_size_kb": 0, 00:16:42.335 "state": "online", 00:16:42.335 "raid_level": "raid1", 00:16:42.335 "superblock": true, 00:16:42.335 "num_base_bdevs": 2, 00:16:42.335 "num_base_bdevs_discovered": 2, 00:16:42.335 "num_base_bdevs_operational": 2, 00:16:42.335 "process": { 
00:16:42.335 "type": "rebuild", 00:16:42.335 "target": "spare", 00:16:42.335 "progress": { 00:16:42.335 "blocks": 2560, 00:16:42.335 "percent": 32 00:16:42.335 } 00:16:42.335 }, 00:16:42.335 "base_bdevs_list": [ 00:16:42.335 { 00:16:42.335 "name": "spare", 00:16:42.335 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:42.335 "is_configured": true, 00:16:42.335 "data_offset": 256, 00:16:42.335 "data_size": 7936 00:16:42.335 }, 00:16:42.335 { 00:16:42.335 "name": "BaseBdev2", 00:16:42.335 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:42.335 "is_configured": true, 00:16:42.335 "data_offset": 256, 00:16:42.335 "data_size": 7936 00:16:42.335 } 00:16:42.335 ] 00:16:42.335 }' 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.335 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.336 [2024-11-28 18:57:11.688982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.336 [2024-11-28 18:57:11.726371] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.336 [2024-11-28 18:57:11.726439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.336 [2024-11-28 18:57:11.726453] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.336 [2024-11-28 18:57:11.726462] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.336 18:57:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.336 "name": "raid_bdev1", 00:16:42.336 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:42.336 "strip_size_kb": 0, 00:16:42.336 "state": "online", 00:16:42.336 "raid_level": "raid1", 00:16:42.336 "superblock": true, 00:16:42.336 "num_base_bdevs": 2, 00:16:42.336 "num_base_bdevs_discovered": 1, 00:16:42.336 "num_base_bdevs_operational": 1, 00:16:42.336 "base_bdevs_list": [ 00:16:42.336 { 00:16:42.336 "name": null, 00:16:42.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.336 "is_configured": false, 00:16:42.336 "data_offset": 0, 00:16:42.336 "data_size": 7936 00:16:42.336 }, 00:16:42.336 { 00:16:42.336 "name": "BaseBdev2", 00:16:42.336 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:42.336 "is_configured": true, 00:16:42.336 "data_offset": 256, 00:16:42.336 "data_size": 7936 00:16:42.336 } 00:16:42.336 ] 00:16:42.336 }' 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.336 18:57:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.596 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.596 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.596 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.596 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.596 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:16:42.596 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.596 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.596 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.596 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.856 "name": "raid_bdev1", 00:16:42.856 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:42.856 "strip_size_kb": 0, 00:16:42.856 "state": "online", 00:16:42.856 "raid_level": "raid1", 00:16:42.856 "superblock": true, 00:16:42.856 "num_base_bdevs": 2, 00:16:42.856 "num_base_bdevs_discovered": 1, 00:16:42.856 "num_base_bdevs_operational": 1, 00:16:42.856 "base_bdevs_list": [ 00:16:42.856 { 00:16:42.856 "name": null, 00:16:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.856 "is_configured": false, 00:16:42.856 "data_offset": 0, 00:16:42.856 "data_size": 7936 00:16:42.856 }, 00:16:42.856 { 00:16:42.856 "name": "BaseBdev2", 00:16:42.856 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:42.856 "is_configured": true, 00:16:42.856 "data_offset": 256, 00:16:42.856 "data_size": 7936 00:16:42.856 } 00:16:42.856 ] 00:16:42.856 }' 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.856 18:57:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.856 [2024-11-28 18:57:12.345806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.856 [2024-11-28 18:57:12.348064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:16:42.856 [2024-11-28 18:57:12.349889] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.856 18:57:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:43.795 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.795 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.795 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.795 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.795 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.795 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.795 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.795 18:57:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.795 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.795 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.055 "name": "raid_bdev1", 00:16:44.055 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:44.055 "strip_size_kb": 0, 00:16:44.055 "state": "online", 00:16:44.055 "raid_level": "raid1", 00:16:44.055 "superblock": true, 00:16:44.055 "num_base_bdevs": 2, 00:16:44.055 "num_base_bdevs_discovered": 2, 00:16:44.055 "num_base_bdevs_operational": 2, 00:16:44.055 "process": { 00:16:44.055 "type": "rebuild", 00:16:44.055 "target": "spare", 00:16:44.055 "progress": { 00:16:44.055 "blocks": 2560, 00:16:44.055 "percent": 32 00:16:44.055 } 00:16:44.055 }, 00:16:44.055 "base_bdevs_list": [ 00:16:44.055 { 00:16:44.055 "name": "spare", 00:16:44.055 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:44.055 "is_configured": true, 00:16:44.055 "data_offset": 256, 00:16:44.055 "data_size": 7936 00:16:44.055 }, 00:16:44.055 { 00:16:44.055 "name": "BaseBdev2", 00:16:44.055 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:44.055 "is_configured": true, 00:16:44.055 "data_offset": 256, 00:16:44.055 "data_size": 7936 00:16:44.055 } 00:16:44.055 ] 00:16:44.055 }' 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:44.055 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=586 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.055 18:57:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.055 "name": "raid_bdev1", 00:16:44.055 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:44.055 "strip_size_kb": 0, 00:16:44.055 "state": "online", 00:16:44.055 "raid_level": "raid1", 00:16:44.055 "superblock": true, 00:16:44.055 "num_base_bdevs": 2, 00:16:44.055 "num_base_bdevs_discovered": 2, 00:16:44.055 "num_base_bdevs_operational": 2, 00:16:44.055 "process": { 00:16:44.055 "type": "rebuild", 00:16:44.055 "target": "spare", 00:16:44.055 "progress": { 00:16:44.055 "blocks": 2816, 00:16:44.055 "percent": 35 00:16:44.055 } 00:16:44.055 }, 00:16:44.055 "base_bdevs_list": [ 00:16:44.055 { 00:16:44.055 "name": "spare", 00:16:44.055 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:44.055 "is_configured": true, 00:16:44.055 "data_offset": 256, 00:16:44.055 "data_size": 7936 00:16:44.055 }, 00:16:44.055 { 00:16:44.055 "name": "BaseBdev2", 00:16:44.055 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:44.055 "is_configured": true, 00:16:44.055 "data_offset": 256, 00:16:44.055 "data_size": 7936 00:16:44.055 } 00:16:44.055 ] 00:16:44.055 }' 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.055 18:57:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 
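The trace above captures a genuine shell error at `bdev_raid.sh` line 666: the command `'[' = false ']'` fails with `unary operator expected` because the variable being compared expanded to the empty string, leaving single-bracket `test` with a missing operand. A minimal sketch of the pitfall and the usual fixes (variable names here are illustrative, not from the test script):

```shell
#!/usr/bin/env bash
# Pitfall: with an unset/empty variable, [ $flag = false ] expands to
# [ = false ], which single-bracket test cannot parse
# ("unary operator expected") -- the same failure the log records.
flag=""

# Fix 1: quote the expansion and supply a default so test always sees
# a well-formed operand.
if [ "${flag:-false}" = false ]; then
    result1="treated-as-false"
fi

# Fix 2: bash's [[ ]] does not word-split, so an empty variable is
# still a single (empty) operand and the comparison simply fails.
if [[ $flag == false ]]; then
    result2="matched"
else
    result2="empty-not-false"
fi
```

Either form keeps the condition well-defined when the variable is empty; the script in the log continues past the error only because the failing `[` returns nonzero rather than aborting.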
00:16:45.436 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.436 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.436 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.436 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.436 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.436 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.436 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.437 "name": "raid_bdev1", 00:16:45.437 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:45.437 "strip_size_kb": 0, 00:16:45.437 "state": "online", 00:16:45.437 "raid_level": "raid1", 00:16:45.437 "superblock": true, 00:16:45.437 "num_base_bdevs": 2, 00:16:45.437 "num_base_bdevs_discovered": 2, 00:16:45.437 "num_base_bdevs_operational": 2, 00:16:45.437 "process": { 00:16:45.437 "type": "rebuild", 00:16:45.437 "target": "spare", 00:16:45.437 "progress": { 00:16:45.437 "blocks": 5888, 00:16:45.437 "percent": 74 
00:16:45.437 } 00:16:45.437 }, 00:16:45.437 "base_bdevs_list": [ 00:16:45.437 { 00:16:45.437 "name": "spare", 00:16:45.437 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:45.437 "is_configured": true, 00:16:45.437 "data_offset": 256, 00:16:45.437 "data_size": 7936 00:16:45.437 }, 00:16:45.437 { 00:16:45.437 "name": "BaseBdev2", 00:16:45.437 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:45.437 "is_configured": true, 00:16:45.437 "data_offset": 256, 00:16:45.437 "data_size": 7936 00:16:45.437 } 00:16:45.437 ] 00:16:45.437 }' 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.437 18:57:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.004 [2024-11-28 18:57:15.466043] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:46.004 [2024-11-28 18:57:15.466108] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:46.004 [2024-11-28 18:57:15.466199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.263 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.263 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.263 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.263 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.263 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.263 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.263 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.263 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.263 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.264 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.264 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.264 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.264 "name": "raid_bdev1", 00:16:46.264 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:46.264 "strip_size_kb": 0, 00:16:46.264 "state": "online", 00:16:46.264 "raid_level": "raid1", 00:16:46.264 "superblock": true, 00:16:46.264 "num_base_bdevs": 2, 00:16:46.264 "num_base_bdevs_discovered": 2, 00:16:46.264 "num_base_bdevs_operational": 2, 00:16:46.264 "base_bdevs_list": [ 00:16:46.264 { 00:16:46.264 "name": "spare", 00:16:46.264 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:46.264 "is_configured": true, 00:16:46.264 "data_offset": 256, 00:16:46.264 "data_size": 7936 00:16:46.264 }, 00:16:46.264 { 00:16:46.264 "name": "BaseBdev2", 00:16:46.264 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:46.264 "is_configured": true, 00:16:46.264 "data_offset": 256, 00:16:46.264 "data_size": 7936 00:16:46.264 } 00:16:46.264 ] 00:16:46.264 }' 00:16:46.264 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:46.524 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:46.524 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.524 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:46.524 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:46.524 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.524 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.524 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.524 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.525 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.525 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.525 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.525 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.525 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.525 18:57:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.525 "name": "raid_bdev1", 00:16:46.525 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:46.525 "strip_size_kb": 0, 00:16:46.525 "state": "online", 00:16:46.525 "raid_level": 
"raid1", 00:16:46.525 "superblock": true, 00:16:46.525 "num_base_bdevs": 2, 00:16:46.525 "num_base_bdevs_discovered": 2, 00:16:46.525 "num_base_bdevs_operational": 2, 00:16:46.525 "base_bdevs_list": [ 00:16:46.525 { 00:16:46.525 "name": "spare", 00:16:46.525 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:46.525 "is_configured": true, 00:16:46.525 "data_offset": 256, 00:16:46.525 "data_size": 7936 00:16:46.525 }, 00:16:46.525 { 00:16:46.525 "name": "BaseBdev2", 00:16:46.525 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:46.525 "is_configured": true, 00:16:46.525 "data_offset": 256, 00:16:46.525 "data_size": 7936 00:16:46.525 } 00:16:46.525 ] 00:16:46.525 }' 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.525 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.836 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.836 "name": "raid_bdev1", 00:16:46.836 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:46.836 "strip_size_kb": 0, 00:16:46.836 "state": "online", 00:16:46.836 "raid_level": "raid1", 00:16:46.836 "superblock": true, 00:16:46.836 "num_base_bdevs": 2, 00:16:46.836 "num_base_bdevs_discovered": 2, 00:16:46.836 "num_base_bdevs_operational": 2, 00:16:46.836 "base_bdevs_list": [ 00:16:46.836 { 00:16:46.836 "name": "spare", 00:16:46.836 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:46.836 "is_configured": true, 00:16:46.836 "data_offset": 256, 00:16:46.836 "data_size": 7936 00:16:46.836 }, 00:16:46.836 { 00:16:46.836 "name": "BaseBdev2", 00:16:46.836 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:46.836 "is_configured": true, 00:16:46.836 "data_offset": 256, 00:16:46.836 "data_size": 7936 00:16:46.836 } 00:16:46.836 ] 00:16:46.836 }' 00:16:46.836 
18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.836 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.121 [2024-11-28 18:57:16.565280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.121 [2024-11-28 18:57:16.565310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.121 [2024-11-28 18:57:16.565393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.121 [2024-11-28 18:57:16.565469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.121 [2024-11-28 18:57:16.565479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:47.121 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:47.382 /dev/nbd0 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@873 -- # local i 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.382 1+0 records in 00:16:47.382 1+0 records out 00:16:47.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405192 s, 10.1 MB/s 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.382 18:57:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:47.382 18:57:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:47.642 /dev/nbd1 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.642 1+0 records in 00:16:47.642 1+0 records out 00:16:47.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054644 s, 7.5 MB/s 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:47.642 18:57:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.642 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:47.902 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:47.902 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:47.902 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:47.902 
18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.902 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.902 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:47.902 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:47.902 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.902 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.902 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:48.163 
18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.163 [2024-11-28 18:57:17.622972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:48.163 [2024-11-28 18:57:17.623098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.163 [2024-11-28 18:57:17.623129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:48.163 [2024-11-28 18:57:17.623138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.163 [2024-11-28 18:57:17.625118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.163 [2024-11-28 18:57:17.625161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:48.163 [2024-11-28 18:57:17.625221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:48.163 [2024-11-28 18:57:17.625268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.163 [2024-11-28 18:57:17.625376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.163 spare 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.163 18:57:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.163 [2024-11-28 18:57:17.725446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:48.163 [2024-11-28 18:57:17.725471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:48.163 [2024-11-28 18:57:17.725555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:16:48.163 [2024-11-28 18:57:17.725651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:48.163 [2024-11-28 18:57:17.725664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:48.163 [2024-11-28 18:57:17.725751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.163 
18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.163 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.424 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.424 "name": "raid_bdev1", 00:16:48.424 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:48.424 "strip_size_kb": 0, 00:16:48.424 "state": "online", 00:16:48.424 "raid_level": "raid1", 00:16:48.424 "superblock": true, 00:16:48.424 "num_base_bdevs": 2, 00:16:48.424 "num_base_bdevs_discovered": 2, 00:16:48.424 "num_base_bdevs_operational": 2, 00:16:48.424 "base_bdevs_list": [ 00:16:48.424 { 00:16:48.424 "name": "spare", 00:16:48.424 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:48.424 "is_configured": true, 00:16:48.424 "data_offset": 256, 00:16:48.424 "data_size": 7936 00:16:48.424 }, 00:16:48.424 { 00:16:48.424 "name": "BaseBdev2", 00:16:48.424 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:48.424 "is_configured": true, 00:16:48.424 "data_offset": 256, 00:16:48.424 "data_size": 7936 
00:16:48.424 } 00:16:48.424 ] 00:16:48.424 }' 00:16:48.424 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.424 18:57:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.685 "name": "raid_bdev1", 00:16:48.685 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:48.685 "strip_size_kb": 0, 00:16:48.685 "state": "online", 00:16:48.685 "raid_level": "raid1", 00:16:48.685 "superblock": true, 00:16:48.685 "num_base_bdevs": 2, 00:16:48.685 "num_base_bdevs_discovered": 2, 00:16:48.685 "num_base_bdevs_operational": 2, 00:16:48.685 "base_bdevs_list": [ 
00:16:48.685 { 00:16:48.685 "name": "spare", 00:16:48.685 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:48.685 "is_configured": true, 00:16:48.685 "data_offset": 256, 00:16:48.685 "data_size": 7936 00:16:48.685 }, 00:16:48.685 { 00:16:48.685 "name": "BaseBdev2", 00:16:48.685 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:48.685 "is_configured": true, 00:16:48.685 "data_offset": 256, 00:16:48.685 "data_size": 7936 00:16:48.685 } 00:16:48.685 ] 00:16:48.685 }' 00:16:48.685 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.945 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.945 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.945 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.945 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.946 18:57:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.946 [2024-11-28 18:57:18.399203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.946 "name": "raid_bdev1", 00:16:48.946 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:48.946 "strip_size_kb": 0, 00:16:48.946 "state": "online", 00:16:48.946 "raid_level": "raid1", 00:16:48.946 "superblock": true, 00:16:48.946 "num_base_bdevs": 2, 00:16:48.946 "num_base_bdevs_discovered": 1, 00:16:48.946 "num_base_bdevs_operational": 1, 00:16:48.946 "base_bdevs_list": [ 00:16:48.946 { 00:16:48.946 "name": null, 00:16:48.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.946 "is_configured": false, 00:16:48.946 "data_offset": 0, 00:16:48.946 "data_size": 7936 00:16:48.946 }, 00:16:48.946 { 00:16:48.946 "name": "BaseBdev2", 00:16:48.946 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:48.946 "is_configured": true, 00:16:48.946 "data_offset": 256, 00:16:48.946 "data_size": 7936 00:16:48.946 } 00:16:48.946 ] 00:16:48.946 }' 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.946 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.516 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:49.516 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.516 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.516 [2024-11-28 18:57:18.847343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.516 [2024-11-28 18:57:18.847592] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:16:49.516 [2024-11-28 18:57:18.847662] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:49.516 [2024-11-28 18:57:18.847717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.516 [2024-11-28 18:57:18.850140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:16:49.516 [2024-11-28 18:57:18.852069] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.516 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.516 18:57:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.456 "name": "raid_bdev1", 00:16:50.456 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:50.456 "strip_size_kb": 0, 00:16:50.456 "state": "online", 00:16:50.456 "raid_level": "raid1", 00:16:50.456 "superblock": true, 00:16:50.456 "num_base_bdevs": 2, 00:16:50.456 "num_base_bdevs_discovered": 2, 00:16:50.456 "num_base_bdevs_operational": 2, 00:16:50.456 "process": { 00:16:50.456 "type": "rebuild", 00:16:50.456 "target": "spare", 00:16:50.456 "progress": { 00:16:50.456 "blocks": 2560, 00:16:50.456 "percent": 32 00:16:50.456 } 00:16:50.456 }, 00:16:50.456 "base_bdevs_list": [ 00:16:50.456 { 00:16:50.456 "name": "spare", 00:16:50.456 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:50.456 "is_configured": true, 00:16:50.456 "data_offset": 256, 00:16:50.456 "data_size": 7936 00:16:50.456 }, 00:16:50.456 { 00:16:50.456 "name": "BaseBdev2", 00:16:50.456 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:50.456 "is_configured": true, 00:16:50.456 "data_offset": 256, 00:16:50.456 "data_size": 7936 00:16:50.456 } 00:16:50.456 ] 00:16:50.456 }' 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:50.456 18:57:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.456 18:57:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.456 [2024-11-28 18:57:19.989987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.456 [2024-11-28 18:57:20.058391] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:50.456 [2024-11-28 18:57:20.058461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.456 [2024-11-28 18:57:20.058476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.456 [2024-11-28 18:57:20.058485] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.717 18:57:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.717 "name": "raid_bdev1", 00:16:50.717 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:50.717 "strip_size_kb": 0, 00:16:50.717 "state": "online", 00:16:50.717 "raid_level": "raid1", 00:16:50.717 "superblock": true, 00:16:50.717 "num_base_bdevs": 2, 00:16:50.717 "num_base_bdevs_discovered": 1, 00:16:50.717 "num_base_bdevs_operational": 1, 00:16:50.717 "base_bdevs_list": [ 00:16:50.717 { 00:16:50.717 "name": null, 00:16:50.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.717 "is_configured": false, 00:16:50.717 "data_offset": 0, 00:16:50.717 "data_size": 7936 00:16:50.717 }, 00:16:50.717 { 00:16:50.717 "name": "BaseBdev2", 00:16:50.717 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:50.717 "is_configured": true, 00:16:50.717 "data_offset": 256, 00:16:50.717 "data_size": 7936 00:16:50.717 } 00:16:50.717 ] 00:16:50.717 }' 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.717 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.978 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:16:50.978 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.978 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.978 [2024-11-28 18:57:20.550105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.978 [2024-11-28 18:57:20.550224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.978 [2024-11-28 18:57:20.550267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:50.978 [2024-11-28 18:57:20.550298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.978 [2024-11-28 18:57:20.550547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.978 [2024-11-28 18:57:20.550608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.978 [2024-11-28 18:57:20.550700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:50.978 [2024-11-28 18:57:20.550746] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:50.978 [2024-11-28 18:57:20.550787] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:50.978 [2024-11-28 18:57:20.550896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.978 [2024-11-28 18:57:20.553283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:16:50.978 [2024-11-28 18:57:20.555268] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.978 spare 00:16:50.978 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.978 18:57:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.359 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.359 "name": 
"raid_bdev1", 00:16:52.359 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:52.359 "strip_size_kb": 0, 00:16:52.359 "state": "online", 00:16:52.360 "raid_level": "raid1", 00:16:52.360 "superblock": true, 00:16:52.360 "num_base_bdevs": 2, 00:16:52.360 "num_base_bdevs_discovered": 2, 00:16:52.360 "num_base_bdevs_operational": 2, 00:16:52.360 "process": { 00:16:52.360 "type": "rebuild", 00:16:52.360 "target": "spare", 00:16:52.360 "progress": { 00:16:52.360 "blocks": 2560, 00:16:52.360 "percent": 32 00:16:52.360 } 00:16:52.360 }, 00:16:52.360 "base_bdevs_list": [ 00:16:52.360 { 00:16:52.360 "name": "spare", 00:16:52.360 "uuid": "4868b894-53e8-5bfa-b957-a9382ccabbb3", 00:16:52.360 "is_configured": true, 00:16:52.360 "data_offset": 256, 00:16:52.360 "data_size": 7936 00:16:52.360 }, 00:16:52.360 { 00:16:52.360 "name": "BaseBdev2", 00:16:52.360 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:52.360 "is_configured": true, 00:16:52.360 "data_offset": 256, 00:16:52.360 "data_size": 7936 00:16:52.360 } 00:16:52.360 ] 00:16:52.360 }' 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.360 [2024-11-28 18:57:21.720290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:52.360 [2024-11-28 18:57:21.761615] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:52.360 [2024-11-28 18:57:21.761711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.360 [2024-11-28 18:57:21.761747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.360 [2024-11-28 18:57:21.761768] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.360 "name": "raid_bdev1", 00:16:52.360 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:52.360 "strip_size_kb": 0, 00:16:52.360 "state": "online", 00:16:52.360 "raid_level": "raid1", 00:16:52.360 "superblock": true, 00:16:52.360 "num_base_bdevs": 2, 00:16:52.360 "num_base_bdevs_discovered": 1, 00:16:52.360 "num_base_bdevs_operational": 1, 00:16:52.360 "base_bdevs_list": [ 00:16:52.360 { 00:16:52.360 "name": null, 00:16:52.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.360 "is_configured": false, 00:16:52.360 "data_offset": 0, 00:16:52.360 "data_size": 7936 00:16:52.360 }, 00:16:52.360 { 00:16:52.360 "name": "BaseBdev2", 00:16:52.360 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:52.360 "is_configured": true, 00:16:52.360 "data_offset": 256, 00:16:52.360 "data_size": 7936 00:16:52.360 } 00:16:52.360 ] 00:16:52.360 }' 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.360 18:57:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.620 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.620 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.620 18:57:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.620 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.620 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.620 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.620 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.620 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.620 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.880 "name": "raid_bdev1", 00:16:52.880 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:52.880 "strip_size_kb": 0, 00:16:52.880 "state": "online", 00:16:52.880 "raid_level": "raid1", 00:16:52.880 "superblock": true, 00:16:52.880 "num_base_bdevs": 2, 00:16:52.880 "num_base_bdevs_discovered": 1, 00:16:52.880 "num_base_bdevs_operational": 1, 00:16:52.880 "base_bdevs_list": [ 00:16:52.880 { 00:16:52.880 "name": null, 00:16:52.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.880 "is_configured": false, 00:16:52.880 "data_offset": 0, 00:16:52.880 "data_size": 7936 00:16:52.880 }, 00:16:52.880 { 00:16:52.880 "name": "BaseBdev2", 00:16:52.880 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:52.880 "is_configured": true, 00:16:52.880 "data_offset": 256, 00:16:52.880 "data_size": 7936 00:16:52.880 } 00:16:52.880 ] 00:16:52.880 }' 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.880 [2024-11-28 18:57:22.381144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:52.880 [2024-11-28 18:57:22.381192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.880 [2024-11-28 18:57:22.381212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:52.880 [2024-11-28 18:57:22.381220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.880 [2024-11-28 18:57:22.381402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.880 [2024-11-28 18:57:22.381417] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:16:52.880 [2024-11-28 18:57:22.381488] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:52.880 [2024-11-28 18:57:22.381516] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:52.880 [2024-11-28 18:57:22.381525] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:52.880 [2024-11-28 18:57:22.381543] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:52.880 BaseBdev1 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.880 18:57:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:53.820 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:53.820 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.820 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.820 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.820 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.820 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:53.820 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.821 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.821 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
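The `[[ none == \n\o\n\e ]]` comparisons traced above look garbled, but they are just bash xtrace rendering a literal string test: inside `[[ ]]` an unquoted right-hand side is a glob pattern, so xtrace escapes every character to show the match is literal. A minimal standalone sketch of the idiom (not taken from bdev_raid.sh itself):

```shell
#!/usr/bin/env bash
# Inside [[ ]], an unquoted right-hand side is treated as a glob pattern.
# Bash's xtrace therefore prints "none" as \n\o\n\e to show that every
# character is escaped, i.e. the comparison is literal.
val=none
[[ $val == \n\o\n\e ]] && echo "literal match"

# Without the escapes, the same syntax pattern-matches: 'none*' is a glob.
[[ nonexistent == none* ]] && echo "glob match"
```

Quoting the right-hand side (`[[ $val == "none" ]]`) has the same literal effect; the escaped form is simply what `set -x` emits.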
00:16:53.821 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.821 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.821 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.821 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.821 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:53.821 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.080 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.080 "name": "raid_bdev1", 00:16:54.080 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:54.080 "strip_size_kb": 0, 00:16:54.080 "state": "online", 00:16:54.080 "raid_level": "raid1", 00:16:54.080 "superblock": true, 00:16:54.080 "num_base_bdevs": 2, 00:16:54.080 "num_base_bdevs_discovered": 1, 00:16:54.080 "num_base_bdevs_operational": 1, 00:16:54.080 "base_bdevs_list": [ 00:16:54.080 { 00:16:54.080 "name": null, 00:16:54.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.080 "is_configured": false, 00:16:54.080 "data_offset": 0, 00:16:54.080 "data_size": 7936 00:16:54.080 }, 00:16:54.080 { 00:16:54.080 "name": "BaseBdev2", 00:16:54.080 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:54.080 "is_configured": true, 00:16:54.080 "data_offset": 256, 00:16:54.080 "data_size": 7936 00:16:54.080 } 00:16:54.080 ] 00:16:54.080 }' 00:16:54.080 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.080 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.340 "name": "raid_bdev1", 00:16:54.340 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:54.340 "strip_size_kb": 0, 00:16:54.340 "state": "online", 00:16:54.340 "raid_level": "raid1", 00:16:54.340 "superblock": true, 00:16:54.340 "num_base_bdevs": 2, 00:16:54.340 "num_base_bdevs_discovered": 1, 00:16:54.340 "num_base_bdevs_operational": 1, 00:16:54.340 "base_bdevs_list": [ 00:16:54.340 { 00:16:54.340 "name": null, 00:16:54.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.340 "is_configured": false, 00:16:54.340 "data_offset": 0, 00:16:54.340 "data_size": 7936 00:16:54.340 }, 00:16:54.340 { 00:16:54.340 "name": "BaseBdev2", 00:16:54.340 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:54.340 "is_configured": 
true, 00:16:54.340 "data_offset": 256, 00:16:54.340 "data_size": 7936 00:16:54.340 } 00:16:54.340 ] 00:16:54.340 }' 00:16:54.340 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.601 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.601 18:57:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:54.601 [2024-11-28 18:57:24.013617] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.601 [2024-11-28 18:57:24.013781] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:54.601 [2024-11-28 18:57:24.013797] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:54.601 request: 00:16:54.601 { 00:16:54.601 "base_bdev": "BaseBdev1", 00:16:54.601 "raid_bdev": "raid_bdev1", 00:16:54.601 "method": "bdev_raid_add_base_bdev", 00:16:54.601 "req_id": 1 00:16:54.601 } 00:16:54.601 Got JSON-RPC error response 00:16:54.601 response: 00:16:54.601 { 00:16:54.601 "code": -22, 00:16:54.601 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:54.601 } 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:54.601 18:57:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.540 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.540 "name": "raid_bdev1", 00:16:55.541 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:55.541 "strip_size_kb": 0, 00:16:55.541 "state": "online", 00:16:55.541 "raid_level": "raid1", 00:16:55.541 "superblock": true, 00:16:55.541 "num_base_bdevs": 2, 00:16:55.541 "num_base_bdevs_discovered": 1, 00:16:55.541 "num_base_bdevs_operational": 1, 00:16:55.541 "base_bdevs_list": [ 00:16:55.541 { 00:16:55.541 "name": null, 00:16:55.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.541 "is_configured": false, 00:16:55.541 
"data_offset": 0, 00:16:55.541 "data_size": 7936 00:16:55.541 }, 00:16:55.541 { 00:16:55.541 "name": "BaseBdev2", 00:16:55.541 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:55.541 "is_configured": true, 00:16:55.541 "data_offset": 256, 00:16:55.541 "data_size": 7936 00:16:55.541 } 00:16:55.541 ] 00:16:55.541 }' 00:16:55.541 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.541 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.110 "name": "raid_bdev1", 00:16:56.110 "uuid": "7273a080-bb82-4e6c-ba9f-ce84d4e24155", 00:16:56.110 
"strip_size_kb": 0, 00:16:56.110 "state": "online", 00:16:56.110 "raid_level": "raid1", 00:16:56.110 "superblock": true, 00:16:56.110 "num_base_bdevs": 2, 00:16:56.110 "num_base_bdevs_discovered": 1, 00:16:56.110 "num_base_bdevs_operational": 1, 00:16:56.110 "base_bdevs_list": [ 00:16:56.110 { 00:16:56.110 "name": null, 00:16:56.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.110 "is_configured": false, 00:16:56.110 "data_offset": 0, 00:16:56.110 "data_size": 7936 00:16:56.110 }, 00:16:56.110 { 00:16:56.110 "name": "BaseBdev2", 00:16:56.110 "uuid": "0779dd6d-e921-5b7c-bead-30693cd5c00f", 00:16:56.110 "is_configured": true, 00:16:56.110 "data_offset": 256, 00:16:56.110 "data_size": 7936 00:16:56.110 } 00:16:56.110 ] 00:16:56.110 }' 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 99593 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99593 ']' 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99593 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99593 00:16:56.110 18:57:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99593' 00:16:56.110 killing process with pid 99593 00:16:56.110 Received shutdown signal, test time was about 60.000000 seconds 00:16:56.110 00:16:56.110 Latency(us) 00:16:56.110 [2024-11-28T18:57:25.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.110 [2024-11-28T18:57:25.716Z] =================================================================================================================== 00:16:56.110 [2024-11-28T18:57:25.716Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99593 00:16:56.110 [2024-11-28 18:57:25.634023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.110 [2024-11-28 18:57:25.634174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.110 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99593 00:16:56.110 [2024-11-28 18:57:25.634225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.110 [2024-11-28 18:57:25.634237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:56.110 [2024-11-28 18:57:25.667531] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.371 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:56.371 00:16:56.371 real 0m18.410s 00:16:56.371 user 0m24.533s 00:16:56.371 sys 0m2.656s 00:16:56.371 18:57:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.371 18:57:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.371 ************************************ 00:16:56.371 END TEST raid_rebuild_test_sb_md_separate 00:16:56.371 ************************************ 00:16:56.371 18:57:25 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:56.371 18:57:25 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:56.371 18:57:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:56.371 18:57:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.371 18:57:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.371 ************************************ 00:16:56.371 START TEST raid_state_function_test_sb_md_interleaved 00:16:56.371 ************************************ 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:56.371 18:57:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:56.371 Process raid pid: 100267 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=100267 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 100267' 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 100267 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100267 ']' 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.371 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.372 18:57:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.631 [2024-11-28 18:57:26.062528] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
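The `waitforlisten 100267` call traced above blocks until the freshly started `bdev_svc` daemon is accepting RPCs on `/var/tmp/spdk.sock`. A hedged sketch of the idiom, assuming a simplified version of the `autotest_common.sh` helper (the real one also probes the socket with an RPC client call):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idiom: poll until the daemon's UNIX-domain
# RPC socket exists, bailing out early if the daemon dies first.
# Assumed simplification of autotest_common.sh; not the exact SPDK helper.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # daemon exited before listening
        [[ -S $rpc_addr ]] && return 0          # socket is up and listening
        sleep 0.1
    done
    return 1                                    # timed out waiting
}
```

The early `kill -0` check matters: without it, a daemon that crashes on startup would make the caller spin for the full timeout instead of failing fast.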
00:16:56.631 [2024-11-28 18:57:26.062740] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.631 [2024-11-28 18:57:26.204766] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:56.891 [2024-11-28 18:57:26.241852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.891 [2024-11-28 18:57:26.268686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.891 [2024-11-28 18:57:26.311709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.891 [2024-11-28 18:57:26.311831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.462 [2024-11-28 18:57:26.875711] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.462 [2024-11-28 18:57:26.875862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:57.462 [2024-11-28 18:57:26.875909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:16:57.462 [2024-11-28 18:57:26.875931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.462 "name": "Existed_Raid", 00:16:57.462 "uuid": "8b292962-c168-4768-b0a6-76dbc6f1d9b9", 00:16:57.462 "strip_size_kb": 0, 00:16:57.462 "state": "configuring", 00:16:57.462 "raid_level": "raid1", 00:16:57.462 "superblock": true, 00:16:57.462 "num_base_bdevs": 2, 00:16:57.462 "num_base_bdevs_discovered": 0, 00:16:57.462 "num_base_bdevs_operational": 2, 00:16:57.462 "base_bdevs_list": [ 00:16:57.462 { 00:16:57.462 "name": "BaseBdev1", 00:16:57.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.462 "is_configured": false, 00:16:57.462 "data_offset": 0, 00:16:57.462 "data_size": 0 00:16:57.462 }, 00:16:57.462 { 00:16:57.462 "name": "BaseBdev2", 00:16:57.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.462 "is_configured": false, 00:16:57.462 "data_offset": 0, 00:16:57.462 "data_size": 0 00:16:57.462 } 00:16:57.462 ] 00:16:57.462 }' 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.462 18:57:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.031 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:58.031 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.032 [2024-11-28 18:57:27.343744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:16:58.032 [2024-11-28 18:57:27.343864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.032 [2024-11-28 18:57:27.351773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:58.032 [2024-11-28 18:57:27.351877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:58.032 [2024-11-28 18:57:27.351908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.032 [2024-11-28 18:57:27.351929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.032 [2024-11-28 18:57:27.372873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.032 BaseBdev1 00:16:58.032 18:57:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.032 [ 00:16:58.032 { 00:16:58.032 "name": "BaseBdev1", 00:16:58.032 "aliases": [ 00:16:58.032 "8bfdfc71-31c0-4deb-b6ae-4cd8cd70e88c" 00:16:58.032 ], 00:16:58.032 "product_name": "Malloc 
disk", 00:16:58.032 "block_size": 4128, 00:16:58.032 "num_blocks": 8192, 00:16:58.032 "uuid": "8bfdfc71-31c0-4deb-b6ae-4cd8cd70e88c", 00:16:58.032 "md_size": 32, 00:16:58.032 "md_interleave": true, 00:16:58.032 "dif_type": 0, 00:16:58.032 "assigned_rate_limits": { 00:16:58.032 "rw_ios_per_sec": 0, 00:16:58.032 "rw_mbytes_per_sec": 0, 00:16:58.032 "r_mbytes_per_sec": 0, 00:16:58.032 "w_mbytes_per_sec": 0 00:16:58.032 }, 00:16:58.032 "claimed": true, 00:16:58.032 "claim_type": "exclusive_write", 00:16:58.032 "zoned": false, 00:16:58.032 "supported_io_types": { 00:16:58.032 "read": true, 00:16:58.032 "write": true, 00:16:58.032 "unmap": true, 00:16:58.032 "flush": true, 00:16:58.032 "reset": true, 00:16:58.032 "nvme_admin": false, 00:16:58.032 "nvme_io": false, 00:16:58.032 "nvme_io_md": false, 00:16:58.032 "write_zeroes": true, 00:16:58.032 "zcopy": true, 00:16:58.032 "get_zone_info": false, 00:16:58.032 "zone_management": false, 00:16:58.032 "zone_append": false, 00:16:58.032 "compare": false, 00:16:58.032 "compare_and_write": false, 00:16:58.032 "abort": true, 00:16:58.032 "seek_hole": false, 00:16:58.032 "seek_data": false, 00:16:58.032 "copy": true, 00:16:58.032 "nvme_iov_md": false 00:16:58.032 }, 00:16:58.032 "memory_domains": [ 00:16:58.032 { 00:16:58.032 "dma_device_id": "system", 00:16:58.032 "dma_device_type": 1 00:16:58.032 }, 00:16:58.032 { 00:16:58.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.032 "dma_device_type": 2 00:16:58.032 } 00:16:58.032 ], 00:16:58.032 "driver_specific": {} 00:16:58.032 } 00:16:58.032 ] 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:58.032 18:57:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.032 "name": "Existed_Raid", 00:16:58.032 "uuid": 
"a3696889-ef6d-4add-83aa-df689ec2eb53", 00:16:58.032 "strip_size_kb": 0, 00:16:58.032 "state": "configuring", 00:16:58.032 "raid_level": "raid1", 00:16:58.032 "superblock": true, 00:16:58.032 "num_base_bdevs": 2, 00:16:58.032 "num_base_bdevs_discovered": 1, 00:16:58.032 "num_base_bdevs_operational": 2, 00:16:58.032 "base_bdevs_list": [ 00:16:58.032 { 00:16:58.032 "name": "BaseBdev1", 00:16:58.032 "uuid": "8bfdfc71-31c0-4deb-b6ae-4cd8cd70e88c", 00:16:58.032 "is_configured": true, 00:16:58.032 "data_offset": 256, 00:16:58.032 "data_size": 7936 00:16:58.032 }, 00:16:58.032 { 00:16:58.032 "name": "BaseBdev2", 00:16:58.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.032 "is_configured": false, 00:16:58.032 "data_offset": 0, 00:16:58.032 "data_size": 0 00:16:58.032 } 00:16:58.032 ] 00:16:58.032 }' 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.032 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.292 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.293 [2024-11-28 18:57:27.861021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.293 [2024-11-28 18:57:27.861064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b 
''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.293 [2024-11-28 18:57:27.873130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.293 [2024-11-28 18:57:27.874926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.293 [2024-11-28 18:57:27.875011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.293 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.552 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.552 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.552 "name": "Existed_Raid", 00:16:58.552 "uuid": "c4b5dec7-677d-410c-8191-68df6c51cbc7", 00:16:58.552 "strip_size_kb": 0, 00:16:58.552 "state": "configuring", 00:16:58.552 "raid_level": "raid1", 00:16:58.552 "superblock": true, 00:16:58.552 "num_base_bdevs": 2, 00:16:58.552 "num_base_bdevs_discovered": 1, 00:16:58.552 "num_base_bdevs_operational": 2, 00:16:58.552 "base_bdevs_list": [ 00:16:58.552 { 00:16:58.552 "name": "BaseBdev1", 00:16:58.552 "uuid": "8bfdfc71-31c0-4deb-b6ae-4cd8cd70e88c", 00:16:58.552 "is_configured": true, 00:16:58.552 "data_offset": 256, 00:16:58.552 "data_size": 7936 00:16:58.552 }, 00:16:58.552 { 00:16:58.552 "name": "BaseBdev2", 00:16:58.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.552 "is_configured": false, 00:16:58.552 "data_offset": 0, 00:16:58.552 
"data_size": 0 00:16:58.552 } 00:16:58.552 ] 00:16:58.552 }' 00:16:58.552 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.552 18:57:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.813 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:58.813 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.813 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.813 [2024-11-28 18:57:28.308495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.813 [2024-11-28 18:57:28.308793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:58.813 [2024-11-28 18:57:28.308845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:58.813 BaseBdev2 00:16:58.813 [2024-11-28 18:57:28.308994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:58.813 [2024-11-28 18:57:28.309081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:58.813 [2024-11-28 18:57:28.309090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:16:58.813 [2024-11-28 18:57:28.309168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.813 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.813 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:58.813 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- 
# local bdev_name=BaseBdev2 00:16:58.813 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:58.813 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:58.813 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.814 [ 00:16:58.814 { 00:16:58.814 "name": "BaseBdev2", 00:16:58.814 "aliases": [ 00:16:58.814 "45edfc83-e072-4d8c-ae17-7c361ccd7798" 00:16:58.814 ], 00:16:58.814 "product_name": "Malloc disk", 00:16:58.814 "block_size": 4128, 00:16:58.814 "num_blocks": 8192, 00:16:58.814 "uuid": "45edfc83-e072-4d8c-ae17-7c361ccd7798", 00:16:58.814 "md_size": 32, 00:16:58.814 "md_interleave": true, 00:16:58.814 "dif_type": 0, 00:16:58.814 "assigned_rate_limits": { 00:16:58.814 "rw_ios_per_sec": 0, 00:16:58.814 "rw_mbytes_per_sec": 0, 
00:16:58.814 "r_mbytes_per_sec": 0, 00:16:58.814 "w_mbytes_per_sec": 0 00:16:58.814 }, 00:16:58.814 "claimed": true, 00:16:58.814 "claim_type": "exclusive_write", 00:16:58.814 "zoned": false, 00:16:58.814 "supported_io_types": { 00:16:58.814 "read": true, 00:16:58.814 "write": true, 00:16:58.814 "unmap": true, 00:16:58.814 "flush": true, 00:16:58.814 "reset": true, 00:16:58.814 "nvme_admin": false, 00:16:58.814 "nvme_io": false, 00:16:58.814 "nvme_io_md": false, 00:16:58.814 "write_zeroes": true, 00:16:58.814 "zcopy": true, 00:16:58.814 "get_zone_info": false, 00:16:58.814 "zone_management": false, 00:16:58.814 "zone_append": false, 00:16:58.814 "compare": false, 00:16:58.814 "compare_and_write": false, 00:16:58.814 "abort": true, 00:16:58.814 "seek_hole": false, 00:16:58.814 "seek_data": false, 00:16:58.814 "copy": true, 00:16:58.814 "nvme_iov_md": false 00:16:58.814 }, 00:16:58.814 "memory_domains": [ 00:16:58.814 { 00:16:58.814 "dma_device_id": "system", 00:16:58.814 "dma_device_type": 1 00:16:58.814 }, 00:16:58.814 { 00:16:58.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.814 "dma_device_type": 2 00:16:58.814 } 00:16:58.814 ], 00:16:58.814 "driver_specific": {} 00:16:58.814 } 00:16:58.814 ] 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.814 "name": "Existed_Raid", 00:16:58.814 "uuid": "c4b5dec7-677d-410c-8191-68df6c51cbc7", 00:16:58.814 "strip_size_kb": 0, 00:16:58.814 "state": 
"online", 00:16:58.814 "raid_level": "raid1", 00:16:58.814 "superblock": true, 00:16:58.814 "num_base_bdevs": 2, 00:16:58.814 "num_base_bdevs_discovered": 2, 00:16:58.814 "num_base_bdevs_operational": 2, 00:16:58.814 "base_bdevs_list": [ 00:16:58.814 { 00:16:58.814 "name": "BaseBdev1", 00:16:58.814 "uuid": "8bfdfc71-31c0-4deb-b6ae-4cd8cd70e88c", 00:16:58.814 "is_configured": true, 00:16:58.814 "data_offset": 256, 00:16:58.814 "data_size": 7936 00:16:58.814 }, 00:16:58.814 { 00:16:58.814 "name": "BaseBdev2", 00:16:58.814 "uuid": "45edfc83-e072-4d8c-ae17-7c361ccd7798", 00:16:58.814 "is_configured": true, 00:16:58.814 "data_offset": 256, 00:16:58.814 "data_size": 7936 00:16:58.814 } 00:16:58.814 ] 00:16:58.814 }' 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.814 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.385 [2024-11-28 18:57:28.812949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.385 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:59.385 "name": "Existed_Raid", 00:16:59.385 "aliases": [ 00:16:59.385 "c4b5dec7-677d-410c-8191-68df6c51cbc7" 00:16:59.385 ], 00:16:59.385 "product_name": "Raid Volume", 00:16:59.385 "block_size": 4128, 00:16:59.385 "num_blocks": 7936, 00:16:59.385 "uuid": "c4b5dec7-677d-410c-8191-68df6c51cbc7", 00:16:59.385 "md_size": 32, 00:16:59.385 "md_interleave": true, 00:16:59.385 "dif_type": 0, 00:16:59.385 "assigned_rate_limits": { 00:16:59.385 "rw_ios_per_sec": 0, 00:16:59.385 "rw_mbytes_per_sec": 0, 00:16:59.385 "r_mbytes_per_sec": 0, 00:16:59.385 "w_mbytes_per_sec": 0 00:16:59.385 }, 00:16:59.385 "claimed": false, 00:16:59.385 "zoned": false, 00:16:59.385 "supported_io_types": { 00:16:59.385 "read": true, 00:16:59.385 "write": true, 00:16:59.385 "unmap": false, 00:16:59.385 "flush": false, 00:16:59.385 "reset": true, 00:16:59.385 "nvme_admin": false, 00:16:59.385 "nvme_io": false, 00:16:59.385 "nvme_io_md": false, 00:16:59.385 "write_zeroes": true, 00:16:59.385 "zcopy": false, 00:16:59.385 "get_zone_info": false, 00:16:59.385 "zone_management": false, 00:16:59.385 "zone_append": false, 00:16:59.385 "compare": false, 00:16:59.385 "compare_and_write": false, 00:16:59.385 "abort": false, 00:16:59.385 "seek_hole": false, 00:16:59.385 "seek_data": false, 00:16:59.385 "copy": false, 00:16:59.385 "nvme_iov_md": false 00:16:59.385 }, 00:16:59.385 
"memory_domains": [ 00:16:59.385 { 00:16:59.385 "dma_device_id": "system", 00:16:59.385 "dma_device_type": 1 00:16:59.385 }, 00:16:59.385 { 00:16:59.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.385 "dma_device_type": 2 00:16:59.385 }, 00:16:59.385 { 00:16:59.385 "dma_device_id": "system", 00:16:59.385 "dma_device_type": 1 00:16:59.385 }, 00:16:59.385 { 00:16:59.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.385 "dma_device_type": 2 00:16:59.385 } 00:16:59.385 ], 00:16:59.385 "driver_specific": { 00:16:59.385 "raid": { 00:16:59.385 "uuid": "c4b5dec7-677d-410c-8191-68df6c51cbc7", 00:16:59.385 "strip_size_kb": 0, 00:16:59.385 "state": "online", 00:16:59.385 "raid_level": "raid1", 00:16:59.385 "superblock": true, 00:16:59.385 "num_base_bdevs": 2, 00:16:59.385 "num_base_bdevs_discovered": 2, 00:16:59.385 "num_base_bdevs_operational": 2, 00:16:59.385 "base_bdevs_list": [ 00:16:59.385 { 00:16:59.385 "name": "BaseBdev1", 00:16:59.385 "uuid": "8bfdfc71-31c0-4deb-b6ae-4cd8cd70e88c", 00:16:59.385 "is_configured": true, 00:16:59.385 "data_offset": 256, 00:16:59.385 "data_size": 7936 00:16:59.385 }, 00:16:59.385 { 00:16:59.385 "name": "BaseBdev2", 00:16:59.385 "uuid": "45edfc83-e072-4d8c-ae17-7c361ccd7798", 00:16:59.385 "is_configured": true, 00:16:59.385 "data_offset": 256, 00:16:59.386 "data_size": 7936 00:16:59.386 } 00:16:59.386 ] 00:16:59.386 } 00:16:59.386 } 00:16:59.386 }' 00:16:59.386 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.386 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:59.386 BaseBdev2' 00:16:59.386 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.386 18:57:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:59.386 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.386 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:59.386 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.386 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.386 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.386 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.646 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:59.646 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:59.646 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.646 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:59.646 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.646 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.646 18:57:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.646 18:57:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.646 [2024-11-28 18:57:29.048810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.646 18:57:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.646 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.647 "name": "Existed_Raid", 00:16:59.647 "uuid": "c4b5dec7-677d-410c-8191-68df6c51cbc7", 00:16:59.647 "strip_size_kb": 0, 00:16:59.647 "state": "online", 00:16:59.647 "raid_level": "raid1", 
00:16:59.647 "superblock": true, 00:16:59.647 "num_base_bdevs": 2, 00:16:59.647 "num_base_bdevs_discovered": 1, 00:16:59.647 "num_base_bdevs_operational": 1, 00:16:59.647 "base_bdevs_list": [ 00:16:59.647 { 00:16:59.647 "name": null, 00:16:59.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.647 "is_configured": false, 00:16:59.647 "data_offset": 0, 00:16:59.647 "data_size": 7936 00:16:59.647 }, 00:16:59.647 { 00:16:59.647 "name": "BaseBdev2", 00:16:59.647 "uuid": "45edfc83-e072-4d8c-ae17-7c361ccd7798", 00:16:59.647 "is_configured": true, 00:16:59.647 "data_offset": 256, 00:16:59.647 "data_size": 7936 00:16:59.647 } 00:16:59.647 ] 00:16:59.647 }' 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.647 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.217 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.217 [2024-11-28 18:57:29.592921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:00.217 [2024-11-28 18:57:29.593043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.217 [2024-11-28 18:57:29.605187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.217 [2024-11-28 18:57:29.605286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.218 [2024-11-28 18:57:29.605325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 100267 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100267 ']' 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100267 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100267 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100267' 00:17:00.218 killing process with pid 100267 00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 100267 00:17:00.218 [2024-11-28 18:57:29.701869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:17:00.218 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 100267 00:17:00.218 [2024-11-28 18:57:29.702886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.478 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:00.478 00:17:00.478 real 0m3.973s 00:17:00.478 user 0m6.245s 00:17:00.478 sys 0m0.878s 00:17:00.478 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.478 18:57:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.478 ************************************ 00:17:00.478 END TEST raid_state_function_test_sb_md_interleaved 00:17:00.478 ************************************ 00:17:00.478 18:57:29 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:00.478 18:57:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:00.478 18:57:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.478 18:57:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.478 ************************************ 00:17:00.478 START TEST raid_superblock_test_md_interleaved 00:17:00.478 ************************************ 00:17:00.478 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:00.478 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:00.478 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:00.478 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:00.478 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local 
base_bdevs_malloc 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=100508 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 100508 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100508 ']' 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.479 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.739 [2024-11-28 18:57:30.101251] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:00.739 [2024-11-28 18:57:30.101462] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100508 ] 00:17:00.739 [2024-11-28 18:57:30.235758] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:00.739 [2024-11-28 18:57:30.273583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.739 [2024-11-28 18:57:30.300208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.998 [2024-11-28 18:57:30.343754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.998 [2024-11-28 18:57:30.343794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.570 18:57:30 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.570 malloc1 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.570 [2024-11-28 18:57:30.936795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.570 [2024-11-28 18:57:30.936943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.570 [2024-11-28 18:57:30.936986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.570 [2024-11-28 18:57:30.937013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.570 [2024-11-28 18:57:30.938913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.570 [2024-11-28 18:57:30.938988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.570 pt1 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:01.570 
18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.570 malloc2 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.570 [2024-11-28 18:57:30.969598] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.570 [2024-11-28 18:57:30.969649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.570 [2024-11-28 18:57:30.969668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.570 [2024-11-28 18:57:30.969676] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.570 [2024-11-28 18:57:30.971499] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.570 [2024-11-28 18:57:30.971589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.570 pt2 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.570 [2024-11-28 18:57:30.981635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.570 [2024-11-28 18:57:30.983334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.570 [2024-11-28 18:57:30.983551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:01.570 [2024-11-28 18:57:30.983585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:01.570 [2024-11-28 18:57:30.983722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:01.570 [2024-11-28 18:57:30.983845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:01.570 [2024-11-28 18:57:30.983898] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:01.570 [2024-11-28 18:57:30.983996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.570 
18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.570 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.571 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.571 18:57:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.571 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.571 
18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.571 "name": "raid_bdev1", 00:17:01.571 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa", 00:17:01.571 "strip_size_kb": 0, 00:17:01.571 "state": "online", 00:17:01.571 "raid_level": "raid1", 00:17:01.571 "superblock": true, 00:17:01.571 "num_base_bdevs": 2, 00:17:01.571 "num_base_bdevs_discovered": 2, 00:17:01.571 "num_base_bdevs_operational": 2, 00:17:01.571 "base_bdevs_list": [ 00:17:01.571 { 00:17:01.571 "name": "pt1", 00:17:01.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:01.571 "is_configured": true, 00:17:01.571 "data_offset": 256, 00:17:01.571 "data_size": 7936 00:17:01.571 }, 00:17:01.571 { 00:17:01.571 "name": "pt2", 00:17:01.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.571 "is_configured": true, 00:17:01.571 "data_offset": 256, 00:17:01.571 "data_size": 7936 00:17:01.571 } 00:17:01.571 ] 00:17:01.571 }' 00:17:01.571 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.571 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.831 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:01.831 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:01.831 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:01.831 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:01.831 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:01.831 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:01.831 18:57:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:01.831 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:01.831 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.831 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.831 [2024-11-28 18:57:31.418008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.091 "name": "raid_bdev1", 00:17:02.091 "aliases": [ 00:17:02.091 "a1a95bf7-186f-4392-b4af-94bd874516fa" 00:17:02.091 ], 00:17:02.091 "product_name": "Raid Volume", 00:17:02.091 "block_size": 4128, 00:17:02.091 "num_blocks": 7936, 00:17:02.091 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa", 00:17:02.091 "md_size": 32, 00:17:02.091 "md_interleave": true, 00:17:02.091 "dif_type": 0, 00:17:02.091 "assigned_rate_limits": { 00:17:02.091 "rw_ios_per_sec": 0, 00:17:02.091 "rw_mbytes_per_sec": 0, 00:17:02.091 "r_mbytes_per_sec": 0, 00:17:02.091 "w_mbytes_per_sec": 0 00:17:02.091 }, 00:17:02.091 "claimed": false, 00:17:02.091 "zoned": false, 00:17:02.091 "supported_io_types": { 00:17:02.091 "read": true, 00:17:02.091 "write": true, 00:17:02.091 "unmap": false, 00:17:02.091 "flush": false, 00:17:02.091 "reset": true, 00:17:02.091 "nvme_admin": false, 00:17:02.091 "nvme_io": false, 00:17:02.091 "nvme_io_md": false, 00:17:02.091 "write_zeroes": true, 00:17:02.091 "zcopy": false, 00:17:02.091 "get_zone_info": false, 00:17:02.091 "zone_management": false, 00:17:02.091 "zone_append": false, 00:17:02.091 "compare": false, 00:17:02.091 "compare_and_write": false, 00:17:02.091 
"abort": false, 00:17:02.091 "seek_hole": false, 00:17:02.091 "seek_data": false, 00:17:02.091 "copy": false, 00:17:02.091 "nvme_iov_md": false 00:17:02.091 }, 00:17:02.091 "memory_domains": [ 00:17:02.091 { 00:17:02.091 "dma_device_id": "system", 00:17:02.091 "dma_device_type": 1 00:17:02.091 }, 00:17:02.091 { 00:17:02.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.091 "dma_device_type": 2 00:17:02.091 }, 00:17:02.091 { 00:17:02.091 "dma_device_id": "system", 00:17:02.091 "dma_device_type": 1 00:17:02.091 }, 00:17:02.091 { 00:17:02.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.091 "dma_device_type": 2 00:17:02.091 } 00:17:02.091 ], 00:17:02.091 "driver_specific": { 00:17:02.091 "raid": { 00:17:02.091 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa", 00:17:02.091 "strip_size_kb": 0, 00:17:02.091 "state": "online", 00:17:02.091 "raid_level": "raid1", 00:17:02.091 "superblock": true, 00:17:02.091 "num_base_bdevs": 2, 00:17:02.091 "num_base_bdevs_discovered": 2, 00:17:02.091 "num_base_bdevs_operational": 2, 00:17:02.091 "base_bdevs_list": [ 00:17:02.091 { 00:17:02.091 "name": "pt1", 00:17:02.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.091 "is_configured": true, 00:17:02.091 "data_offset": 256, 00:17:02.091 "data_size": 7936 00:17:02.091 }, 00:17:02.091 { 00:17:02.091 "name": "pt2", 00:17:02.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.091 "is_configured": true, 00:17:02.091 "data_offset": 256, 00:17:02.091 "data_size": 7936 00:17:02.091 } 00:17:02.091 ] 00:17:02.091 } 00:17:02.091 } 00:17:02.091 }' 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:02.091 pt2' 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.091 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.092 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.092 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:02.092 [2024-11-28 18:57:31.633986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.092 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.092 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a1a95bf7-186f-4392-b4af-94bd874516fa 00:17:02.092 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z a1a95bf7-186f-4392-b4af-94bd874516fa ']' 00:17:02.092 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.092 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.092 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.092 [2024-11-28 18:57:31.685799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.092 [2024-11-28 18:57:31.685822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.092 [2024-11-28 
18:57:31.685896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.092 [2024-11-28 18:57:31.685954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.092 [2024-11-28 18:57:31.685967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:02.092 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.352 18:57:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:02.352 18:57:31 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.352 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.352 [2024-11-28 18:57:31.821842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:02.352 [2024-11-28 18:57:31.823686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:02.352 [2024-11-28 18:57:31.823787] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:02.352 [2024-11-28 18:57:31.823885] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:02.352 [2024-11-28 18:57:31.823942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.352 [2024-11-28 18:57:31.823960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:17:02.352 request: 00:17:02.352 { 00:17:02.352 "name": "raid_bdev1", 00:17:02.352 "raid_level": "raid1", 00:17:02.352 "base_bdevs": [ 00:17:02.352 "malloc1", 00:17:02.352 "malloc2" 00:17:02.352 ], 00:17:02.352 "superblock": false, 00:17:02.352 "method": "bdev_raid_create", 00:17:02.352 "req_id": 1 00:17:02.352 } 00:17:02.352 Got JSON-RPC error response 
00:17:02.353 response: 00:17:02.353 { 00:17:02.353 "code": -17, 00:17:02.353 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:02.353 } 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.353 
[2024-11-28 18:57:31.873823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.353 [2024-11-28 18:57:31.873916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.353 [2024-11-28 18:57:31.873949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:02.353 [2024-11-28 18:57:31.873978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.353 [2024-11-28 18:57:31.875816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.353 [2024-11-28 18:57:31.875915] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.353 [2024-11-28 18:57:31.875994] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:02.353 [2024-11-28 18:57:31.876062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.353 pt1 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.353 "name": "raid_bdev1", 00:17:02.353 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa", 00:17:02.353 "strip_size_kb": 0, 00:17:02.353 "state": "configuring", 00:17:02.353 "raid_level": "raid1", 00:17:02.353 "superblock": true, 00:17:02.353 "num_base_bdevs": 2, 00:17:02.353 "num_base_bdevs_discovered": 1, 00:17:02.353 "num_base_bdevs_operational": 2, 00:17:02.353 "base_bdevs_list": [ 00:17:02.353 { 00:17:02.353 "name": "pt1", 00:17:02.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.353 "is_configured": true, 00:17:02.353 "data_offset": 256, 00:17:02.353 "data_size": 7936 00:17:02.353 }, 00:17:02.353 { 00:17:02.353 "name": null, 00:17:02.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.353 "is_configured": false, 00:17:02.353 "data_offset": 256, 00:17:02.353 "data_size": 7936 00:17:02.353 } 00:17:02.353 ] 00:17:02.353 }' 00:17:02.353 18:57:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.353 18:57:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.923 [2024-11-28 18:57:32.337952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:02.923 [2024-11-28 18:57:32.338010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.923 [2024-11-28 18:57:32.338028] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:02.923 [2024-11-28 18:57:32.338038] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.923 [2024-11-28 18:57:32.338161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.923 [2024-11-28 18:57:32.338175] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:02.923 [2024-11-28 18:57:32.338212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:02.923 [2024-11-28 18:57:32.338229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.923 [2024-11-28 18:57:32.338293] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:02.923 [2024-11-28 18:57:32.338303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:02.923 [2024-11-28 18:57:32.338365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:02.923 [2024-11-28 18:57:32.338447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:02.923 [2024-11-28 18:57:32.338455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:02.923 [2024-11-28 18:57:32.338510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.923 pt2 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.923 "name": "raid_bdev1", 00:17:02.923 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa", 00:17:02.923 "strip_size_kb": 0, 00:17:02.923 "state": "online", 00:17:02.923 "raid_level": "raid1", 00:17:02.923 "superblock": true, 00:17:02.923 "num_base_bdevs": 2, 00:17:02.923 "num_base_bdevs_discovered": 2, 00:17:02.923 "num_base_bdevs_operational": 2, 00:17:02.923 "base_bdevs_list": [ 00:17:02.923 { 00:17:02.923 "name": "pt1", 00:17:02.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.923 "is_configured": true, 00:17:02.923 "data_offset": 256, 00:17:02.923 "data_size": 7936 00:17:02.923 }, 00:17:02.923 { 00:17:02.923 "name": "pt2", 00:17:02.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.923 "is_configured": true, 00:17:02.923 "data_offset": 256, 00:17:02.923 "data_size": 7936 00:17:02.923 } 00:17:02.923 ] 00:17:02.923 }' 00:17:02.923 18:57:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.923 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.493 [2024-11-28 18:57:32.814318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.493 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.493 "name": "raid_bdev1", 00:17:03.493 "aliases": [ 00:17:03.493 "a1a95bf7-186f-4392-b4af-94bd874516fa" 00:17:03.493 ], 00:17:03.493 "product_name": "Raid Volume", 00:17:03.493 "block_size": 4128, 00:17:03.493 
"num_blocks": 7936, 00:17:03.493 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa", 00:17:03.493 "md_size": 32, 00:17:03.493 "md_interleave": true, 00:17:03.493 "dif_type": 0, 00:17:03.493 "assigned_rate_limits": { 00:17:03.493 "rw_ios_per_sec": 0, 00:17:03.493 "rw_mbytes_per_sec": 0, 00:17:03.493 "r_mbytes_per_sec": 0, 00:17:03.493 "w_mbytes_per_sec": 0 00:17:03.493 }, 00:17:03.493 "claimed": false, 00:17:03.493 "zoned": false, 00:17:03.493 "supported_io_types": { 00:17:03.493 "read": true, 00:17:03.493 "write": true, 00:17:03.493 "unmap": false, 00:17:03.493 "flush": false, 00:17:03.493 "reset": true, 00:17:03.493 "nvme_admin": false, 00:17:03.493 "nvme_io": false, 00:17:03.493 "nvme_io_md": false, 00:17:03.493 "write_zeroes": true, 00:17:03.493 "zcopy": false, 00:17:03.493 "get_zone_info": false, 00:17:03.493 "zone_management": false, 00:17:03.493 "zone_append": false, 00:17:03.493 "compare": false, 00:17:03.493 "compare_and_write": false, 00:17:03.493 "abort": false, 00:17:03.493 "seek_hole": false, 00:17:03.493 "seek_data": false, 00:17:03.493 "copy": false, 00:17:03.493 "nvme_iov_md": false 00:17:03.493 }, 00:17:03.493 "memory_domains": [ 00:17:03.493 { 00:17:03.493 "dma_device_id": "system", 00:17:03.493 "dma_device_type": 1 00:17:03.493 }, 00:17:03.493 { 00:17:03.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.493 "dma_device_type": 2 00:17:03.493 }, 00:17:03.493 { 00:17:03.493 "dma_device_id": "system", 00:17:03.493 "dma_device_type": 1 00:17:03.493 }, 00:17:03.493 { 00:17:03.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.493 "dma_device_type": 2 00:17:03.493 } 00:17:03.493 ], 00:17:03.493 "driver_specific": { 00:17:03.493 "raid": { 00:17:03.493 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa", 00:17:03.493 "strip_size_kb": 0, 00:17:03.493 "state": "online", 00:17:03.493 "raid_level": "raid1", 00:17:03.493 "superblock": true, 00:17:03.493 "num_base_bdevs": 2, 00:17:03.493 "num_base_bdevs_discovered": 2, 00:17:03.494 "num_base_bdevs_operational": 
2, 00:17:03.494 "base_bdevs_list": [ 00:17:03.494 { 00:17:03.494 "name": "pt1", 00:17:03.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.494 "is_configured": true, 00:17:03.494 "data_offset": 256, 00:17:03.494 "data_size": 7936 00:17:03.494 }, 00:17:03.494 { 00:17:03.494 "name": "pt2", 00:17:03.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.494 "is_configured": true, 00:17:03.494 "data_offset": 256, 00:17:03.494 "data_size": 7936 00:17:03.494 } 00:17:03.494 ] 00:17:03.494 } 00:17:03.494 } 00:17:03.494 }' 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:03.494 pt2' 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.494 18:57:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.494 18:57:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.494 [2024-11-28 18:57:33.050353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' a1a95bf7-186f-4392-b4af-94bd874516fa '!=' a1a95bf7-186f-4392-b4af-94bd874516fa ']' 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.494 [2024-11-28 18:57:33.090150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.494 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.753 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.753 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.754 "name": "raid_bdev1", 00:17:03.754 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa", 00:17:03.754 "strip_size_kb": 0, 00:17:03.754 "state": "online", 00:17:03.754 "raid_level": "raid1", 00:17:03.754 "superblock": true, 00:17:03.754 "num_base_bdevs": 2, 00:17:03.754 "num_base_bdevs_discovered": 1, 00:17:03.754 "num_base_bdevs_operational": 1, 00:17:03.754 "base_bdevs_list": [ 00:17:03.754 { 00:17:03.754 "name": null, 00:17:03.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.754 "is_configured": false, 00:17:03.754 "data_offset": 0, 00:17:03.754 "data_size": 7936 00:17:03.754 }, 00:17:03.754 { 00:17:03.754 "name": "pt2", 00:17:03.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.754 "is_configured": true, 00:17:03.754 "data_offset": 256, 00:17:03.754 "data_size": 7936 00:17:03.754 } 00:17:03.754 ] 00:17:03.754 
}' 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.754 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.014 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.014 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.014 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.014 [2024-11-28 18:57:33.590261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.014 [2024-11-28 18:57:33.590334] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.014 [2024-11-28 18:57:33.590413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.014 [2024-11-28 18:57:33.590471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.014 [2024-11-28 18:57:33.590483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:04.014 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.014 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.014 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.014 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.014 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:04.014 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.274 
18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.274 [2024-11-28 18:57:33.666284] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:04.274 [2024-11-28 18:57:33.666400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:04.274 [2024-11-28 18:57:33.666442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:17:04.274 [2024-11-28 18:57:33.666474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:04.274 [2024-11-28 18:57:33.668383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:04.274 [2024-11-28 18:57:33.668483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:04.274 [2024-11-28 18:57:33.668551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:04.274 [2024-11-28 18:57:33.668601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:04.274 [2024-11-28 18:57:33.668693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:17:04.274 [2024-11-28 18:57:33.668732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:17:04.274 [2024-11-28 18:57:33.668834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:17:04.274 [2024-11-28 18:57:33.668929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:17:04.274 [2024-11-28 18:57:33.668964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:17:04.274 [2024-11-28 18:57:33.669054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:04.274 pt2
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:04.274 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.275 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:04.275 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.275 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:04.275 "name": "raid_bdev1",
00:17:04.275 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa",
00:17:04.275 "strip_size_kb": 0,
00:17:04.275 "state": "online",
00:17:04.275 "raid_level": "raid1",
00:17:04.275 "superblock": true,
00:17:04.275 "num_base_bdevs": 2,
00:17:04.275 "num_base_bdevs_discovered": 1,
00:17:04.275 "num_base_bdevs_operational": 1,
00:17:04.275 "base_bdevs_list": [
00:17:04.275 {
00:17:04.275 "name": null,
00:17:04.275 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:04.275 "is_configured": false,
00:17:04.275 "data_offset": 256,
00:17:04.275 "data_size": 7936
00:17:04.275 },
00:17:04.275 {
00:17:04.275 "name": "pt2",
00:17:04.275 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:04.275 "is_configured": true,
00:17:04.275 "data_offset": 256,
00:17:04.275 "data_size": 7936
00:17:04.275 }
00:17:04.275 ]
00:17:04.275 }'
00:17:04.275 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:04.275 18:57:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:04.534 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:04.534 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.534 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:04.534 [2024-11-28 18:57:34.126439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:04.534 [2024-11-28 18:57:34.126470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:04.534 [2024-11-28 18:57:34.126543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:04.534 [2024-11-28 18:57:34.126599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:04.534 [2024-11-28 18:57:34.126609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:17:04.534 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.534 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:04.534 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.534 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:17:04.534 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:04.794 [2024-11-28 18:57:34.186450] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:04.794 [2024-11-28 18:57:34.186548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:04.794 [2024-11-28 18:57:34.186584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:17:04.794 [2024-11-28 18:57:34.186610] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:04.794 [2024-11-28 18:57:34.188542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:04.794 [2024-11-28 18:57:34.188607] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:04.794 [2024-11-28 18:57:34.188689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:04.794 [2024-11-28 18:57:34.188735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:04.794 [2024-11-28 18:57:34.188843] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:17:04.794 [2024-11-28 18:57:34.188897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:04.794 [2024-11-28 18:57:34.188942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring
00:17:04.794 [2024-11-28 18:57:34.189018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:04.794 [2024-11-28 18:57:34.189120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:17:04.794 [2024-11-28 18:57:34.189160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:17:04.794 [2024-11-28 18:57:34.189243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:17:04.794 [2024-11-28 18:57:34.189331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:17:04.794 [2024-11-28 18:57:34.189367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:17:04.794 [2024-11-28 18:57:34.189472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:04.794 pt1
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:04.794 "name": "raid_bdev1",
00:17:04.794 "uuid": "a1a95bf7-186f-4392-b4af-94bd874516fa",
00:17:04.794 "strip_size_kb": 0,
00:17:04.794 "state": "online",
00:17:04.794 "raid_level": "raid1",
00:17:04.794 "superblock": true,
00:17:04.794 "num_base_bdevs": 2,
00:17:04.794 "num_base_bdevs_discovered": 1,
00:17:04.794 "num_base_bdevs_operational": 1,
00:17:04.794 "base_bdevs_list": [
00:17:04.794 {
00:17:04.794 "name": null,
00:17:04.794 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:04.794 "is_configured": false,
00:17:04.794 "data_offset": 256,
00:17:04.794 "data_size": 7936
00:17:04.794 },
00:17:04.794 {
00:17:04.794 "name": "pt2",
00:17:04.794 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:04.794 "is_configured": true,
00:17:04.794 "data_offset": 256,
00:17:04.794 "data_size": 7936
00:17:04.794 }
00:17:04.794 ]
00:17:04.794 }'
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:04.794 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:05.054 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:17:05.054 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:17:05.054 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.054 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:17:05.314 [2024-11-28 18:57:34.694779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' a1a95bf7-186f-4392-b4af-94bd874516fa '!=' a1a95bf7-186f-4392-b4af-94bd874516fa ']'
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 100508
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100508 ']'
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100508
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100508
00:17:05.314 killing process with pid 100508 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100508'
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 100508
00:17:05.314 [2024-11-28 18:57:34.780749] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:05.314 [2024-11-28 18:57:34.780823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:05.314 [2024-11-28 18:57:34.780860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:05.314 [2024-11-28 18:57:34.780870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:17:05.314 18:57:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 100508
00:17:05.314 [2024-11-28 18:57:34.804656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:05.574 ************************************
00:17:05.574 END TEST raid_superblock_test_md_interleaved 18:57:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0
00:17:05.574
00:17:05.574 real 0m5.015s
00:17:05.574 user 0m8.211s
00:17:05.574 sys 0m1.119s
00:17:05.574 18:57:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:05.574 18:57:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:05.574 ************************************
00:17:05.574 18:57:35 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false
00:17:05.574 18:57:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:17:05.574 18:57:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:05.574 18:57:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:05.574 ************************************
00:17:05.574 START TEST raid_rebuild_test_sb_md_interleaved ************************************
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=100824
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 100824
00:17:05.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100824 ']'
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:05.574 18:57:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:05.833 I/O size of 3145728 is greater than zero copy threshold (65536).
00:17:05.833 Zero copy mechanism will not be used.
00:17:05.833 [2024-11-28 18:57:35.196800] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:17:05.833 [2024-11-28 18:57:35.196911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100824 ]
00:17:05.833 [2024-11-28 18:57:35.329637] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:17:05.833 [2024-11-28 18:57:35.367093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:05.833 [2024-11-28 18:57:35.394069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:06.092 [2024-11-28 18:57:35.437610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:06.092 [2024-11-28 18:57:35.437647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:06.663 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:06.664 BaseBdev1_malloc
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:06.664 [2024-11-28 18:57:36.026550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc [2024-11-28 18:57:36.026635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened [2024-11-28 18:57:36.026656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:06.664 [2024-11-28 18:57:36.026678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:06.664 [2024-11-28 18:57:36.028655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:06.664 [2024-11-28 18:57:36.028769] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:06.664 BaseBdev1
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:06.664 BaseBdev2_malloc
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:06.664 [2024-11-28 18:57:36.051555] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:17:06.664 [2024-11-28 18:57:36.051609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:06.664 [2024-11-28 18:57:36.051628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:06.664 [2024-11-28 18:57:36.051638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:06.664 [2024-11-28 18:57:36.053526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:06.664 [2024-11-28 18:57:36.053562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:17:06.664 BaseBdev2
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:06.664 spare_malloc
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:06.664 spare_delay
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:06.664 [2024-11-28 18:57:36.108172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:06.664 [2024-11-28 18:57:36.108248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:06.664 [2024-11-28 18:57:36.108282] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:06.664 [2024-11-28 18:57:36.108299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:06.664 [2024-11-28 18:57:36.110914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:06.664 [2024-11-28 18:57:36.110966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:06.664 spare
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:06.664 [2024-11-28 18:57:36.120169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:06.664 [2024-11-28 18:57:36.122063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:06.664 [2024-11-28 18:57:36.122315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:17:06.664 [2024-11-28 18:57:36.122341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:17:06.664 [2024-11-28 18:57:36.122437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:17:06.664 [2024-11-28 18:57:36.122513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:17:06.664 [2024-11-28 18:57:36.122521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:17:06.664 [2024-11-28 18:57:36.122589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:06.664 "name": "raid_bdev1",
00:17:06.664 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f",
00:17:06.664 "strip_size_kb": 0,
00:17:06.664 "state": "online",
00:17:06.664 "raid_level": "raid1",
00:17:06.664 "superblock": true,
00:17:06.664 "num_base_bdevs": 2,
00:17:06.664 "num_base_bdevs_discovered": 2,
00:17:06.664 "num_base_bdevs_operational": 2,
00:17:06.664 "base_bdevs_list": [
00:17:06.664 {
00:17:06.664 "name": "BaseBdev1",
00:17:06.664 "uuid": "ce8318fd-e285-5205-a4b8-8a3a6752696a",
00:17:06.664 "is_configured": true,
00:17:06.664 "data_offset": 256,
00:17:06.664 "data_size": 7936
00:17:06.664 },
00:17:06.664 {
00:17:06.664 "name": "BaseBdev2",
00:17:06.664 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0",
00:17:06.664 "is_configured": true,
00:17:06.664 "data_offset": 256,
00:17:06.664 "data_size": 7936
00:17:06.664 }
00:17:06.664 ]
00:17:06.664 }'
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:06.664 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:07.234 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:17:07.235 [2024-11-28 18:57:36.576575] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']'
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:07.235 [2024-11-28 18:57:36.676268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:07.235 "name": "raid_bdev1",
00:17:07.235 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f",
00:17:07.235 "strip_size_kb": 0,
00:17:07.235 "state": "online",
00:17:07.235 "raid_level": "raid1",
00:17:07.235 "superblock": true,
00:17:07.235 "num_base_bdevs": 2,
00:17:07.235 "num_base_bdevs_discovered": 1,
00:17:07.235 "num_base_bdevs_operational": 1,
00:17:07.235 "base_bdevs_list": [
00:17:07.235 {
00:17:07.235 "name": null,
00:17:07.235 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:07.235 "is_configured": false,
00:17:07.235 "data_offset": 0,
00:17:07.235 "data_size": 7936
00:17:07.235 },
00:17:07.235 {
00:17:07.235 "name": "BaseBdev2",
00:17:07.235 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0",
00:17:07.235 "is_configured": true,
00:17:07.235 "data_offset": 256,
00:17:07.235 "data_size": 7936
00:17:07.235 }
00:17:07.235 ]
00:17:07.235 }'
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:07.235 18:57:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:07.805 18:57:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:07.805 18:57:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.805 18:57:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:07.805 [2024-11-28 18:57:37.148421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:07.805 [2024-11-28 18:57:37.152160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:17:07.805 [2024-11-28 18:57:37.154073] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:07.805 18:57:37
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.805 18:57:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.807 "name": "raid_bdev1", 00:17:08.807 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:08.807 "strip_size_kb": 0, 00:17:08.807 "state": "online", 00:17:08.807 "raid_level": "raid1", 00:17:08.807 "superblock": true, 00:17:08.807 "num_base_bdevs": 2, 00:17:08.807 "num_base_bdevs_discovered": 2, 00:17:08.807 "num_base_bdevs_operational": 2, 00:17:08.807 "process": { 00:17:08.807 "type": "rebuild", 
00:17:08.807 "target": "spare", 00:17:08.807 "progress": { 00:17:08.807 "blocks": 2560, 00:17:08.807 "percent": 32 00:17:08.807 } 00:17:08.807 }, 00:17:08.807 "base_bdevs_list": [ 00:17:08.807 { 00:17:08.807 "name": "spare", 00:17:08.807 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:08.807 "is_configured": true, 00:17:08.807 "data_offset": 256, 00:17:08.807 "data_size": 7936 00:17:08.807 }, 00:17:08.807 { 00:17:08.807 "name": "BaseBdev2", 00:17:08.807 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:08.807 "is_configured": true, 00:17:08.807 "data_offset": 256, 00:17:08.807 "data_size": 7936 00:17:08.807 } 00:17:08.807 ] 00:17:08.807 }' 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.807 [2024-11-28 18:57:38.320051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.807 [2024-11-28 18:57:38.361031] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.807 [2024-11-28 18:57:38.361096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.807 [2024-11-28 18:57:38.361111] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.807 [2024-11-28 18:57:38.361123] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.807 18:57:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.807 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.087 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.087 "name": "raid_bdev1", 00:17:09.087 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:09.087 "strip_size_kb": 0, 00:17:09.087 "state": "online", 00:17:09.087 "raid_level": "raid1", 00:17:09.087 "superblock": true, 00:17:09.087 "num_base_bdevs": 2, 00:17:09.087 "num_base_bdevs_discovered": 1, 00:17:09.087 "num_base_bdevs_operational": 1, 00:17:09.087 "base_bdevs_list": [ 00:17:09.087 { 00:17:09.087 "name": null, 00:17:09.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.087 "is_configured": false, 00:17:09.087 "data_offset": 0, 00:17:09.087 "data_size": 7936 00:17:09.087 }, 00:17:09.087 { 00:17:09.087 "name": "BaseBdev2", 00:17:09.087 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:09.087 "is_configured": true, 00:17:09.087 "data_offset": 256, 00:17:09.087 "data_size": 7936 00:17:09.087 } 00:17:09.087 ] 00:17:09.087 }' 00:17:09.087 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.087 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.347 18:57:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.347 "name": "raid_bdev1", 00:17:09.347 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:09.347 "strip_size_kb": 0, 00:17:09.347 "state": "online", 00:17:09.347 "raid_level": "raid1", 00:17:09.347 "superblock": true, 00:17:09.347 "num_base_bdevs": 2, 00:17:09.347 "num_base_bdevs_discovered": 1, 00:17:09.347 "num_base_bdevs_operational": 1, 00:17:09.347 "base_bdevs_list": [ 00:17:09.347 { 00:17:09.347 "name": null, 00:17:09.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.347 "is_configured": false, 00:17:09.347 "data_offset": 0, 00:17:09.347 "data_size": 7936 00:17:09.347 }, 00:17:09.347 { 00:17:09.347 "name": "BaseBdev2", 00:17:09.347 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:09.347 "is_configured": true, 00:17:09.347 "data_offset": 256, 00:17:09.347 "data_size": 7936 00:17:09.347 } 00:17:09.347 ] 00:17:09.347 }' 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.347 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.347 18:57:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.607 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.607 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.607 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.607 18:57:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.608 [2024-11-28 18:57:38.997350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.608 [2024-11-28 18:57:39.001042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:09.608 [2024-11-28 18:57:39.002923] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.608 18:57:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.608 18:57:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.547 "name": "raid_bdev1", 00:17:10.547 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:10.547 "strip_size_kb": 0, 00:17:10.547 "state": "online", 00:17:10.547 "raid_level": "raid1", 00:17:10.547 "superblock": true, 00:17:10.547 "num_base_bdevs": 2, 00:17:10.547 "num_base_bdevs_discovered": 2, 00:17:10.547 "num_base_bdevs_operational": 2, 00:17:10.547 "process": { 00:17:10.547 "type": "rebuild", 00:17:10.547 "target": "spare", 00:17:10.547 "progress": { 00:17:10.547 "blocks": 2560, 00:17:10.547 "percent": 32 00:17:10.547 } 00:17:10.547 }, 00:17:10.547 "base_bdevs_list": [ 00:17:10.547 { 00:17:10.547 "name": "spare", 00:17:10.547 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:10.547 "is_configured": true, 00:17:10.547 "data_offset": 256, 00:17:10.547 "data_size": 7936 00:17:10.547 }, 00:17:10.547 { 00:17:10.547 "name": "BaseBdev2", 00:17:10.547 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:10.547 "is_configured": true, 00:17:10.547 "data_offset": 256, 00:17:10.547 "data_size": 7936 00:17:10.547 } 00:17:10.547 ] 00:17:10.547 }' 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.547 18:57:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:10.547 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=613 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.547 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.548 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.548 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.807 18:57:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.807 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.807 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.808 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.808 "name": "raid_bdev1", 00:17:10.808 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:10.808 "strip_size_kb": 0, 00:17:10.808 "state": "online", 00:17:10.808 "raid_level": "raid1", 00:17:10.808 "superblock": true, 00:17:10.808 "num_base_bdevs": 2, 00:17:10.808 "num_base_bdevs_discovered": 2, 00:17:10.808 "num_base_bdevs_operational": 2, 00:17:10.808 "process": { 00:17:10.808 "type": "rebuild", 00:17:10.808 "target": "spare", 00:17:10.808 "progress": { 00:17:10.808 "blocks": 2816, 00:17:10.808 "percent": 35 00:17:10.808 } 00:17:10.808 }, 00:17:10.808 "base_bdevs_list": [ 00:17:10.808 { 00:17:10.808 "name": "spare", 00:17:10.808 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:10.808 "is_configured": true, 00:17:10.808 "data_offset": 256, 00:17:10.808 "data_size": 7936 00:17:10.808 }, 00:17:10.808 { 00:17:10.808 "name": "BaseBdev2", 00:17:10.808 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:10.808 "is_configured": true, 00:17:10.808 "data_offset": 256, 00:17:10.808 "data_size": 7936 00:17:10.808 } 00:17:10.808 ] 00:17:10.808 }' 00:17:10.808 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.808 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.808 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:10.808 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.808 18:57:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.748 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.749 "name": "raid_bdev1", 00:17:11.749 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:11.749 "strip_size_kb": 0, 00:17:11.749 "state": "online", 00:17:11.749 "raid_level": "raid1", 00:17:11.749 "superblock": true, 
00:17:11.749 "num_base_bdevs": 2, 00:17:11.749 "num_base_bdevs_discovered": 2, 00:17:11.749 "num_base_bdevs_operational": 2, 00:17:11.749 "process": { 00:17:11.749 "type": "rebuild", 00:17:11.749 "target": "spare", 00:17:11.749 "progress": { 00:17:11.749 "blocks": 5632, 00:17:11.749 "percent": 70 00:17:11.749 } 00:17:11.749 }, 00:17:11.749 "base_bdevs_list": [ 00:17:11.749 { 00:17:11.749 "name": "spare", 00:17:11.749 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:11.749 "is_configured": true, 00:17:11.749 "data_offset": 256, 00:17:11.749 "data_size": 7936 00:17:11.749 }, 00:17:11.749 { 00:17:11.749 "name": "BaseBdev2", 00:17:11.749 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:11.749 "is_configured": true, 00:17:11.749 "data_offset": 256, 00:17:11.749 "data_size": 7936 00:17:11.749 } 00:17:11.749 ] 00:17:11.749 }' 00:17:11.749 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.008 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.008 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.009 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.009 18:57:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.577 [2024-11-28 18:57:42.119239] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:12.577 [2024-11-28 18:57:42.119309] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:12.577 [2024-11-28 18:57:42.119399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.837 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.837 18:57:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.837 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.837 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.837 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.837 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.837 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.837 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.837 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.837 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.096 "name": "raid_bdev1", 00:17:13.096 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:13.096 "strip_size_kb": 0, 00:17:13.096 "state": "online", 00:17:13.096 "raid_level": "raid1", 00:17:13.096 "superblock": true, 00:17:13.096 "num_base_bdevs": 2, 00:17:13.096 "num_base_bdevs_discovered": 2, 00:17:13.096 "num_base_bdevs_operational": 2, 00:17:13.096 "base_bdevs_list": [ 00:17:13.096 { 00:17:13.096 "name": "spare", 00:17:13.096 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:13.096 "is_configured": true, 00:17:13.096 "data_offset": 256, 00:17:13.096 "data_size": 7936 00:17:13.096 }, 00:17:13.096 { 00:17:13.096 
"name": "BaseBdev2", 00:17:13.096 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:13.096 "is_configured": true, 00:17:13.096 "data_offset": 256, 00:17:13.096 "data_size": 7936 00:17:13.096 } 00:17:13.096 ] 00:17:13.096 }' 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.096 
18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.096 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.096 "name": "raid_bdev1", 00:17:13.096 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:13.096 "strip_size_kb": 0, 00:17:13.096 "state": "online", 00:17:13.096 "raid_level": "raid1", 00:17:13.096 "superblock": true, 00:17:13.096 "num_base_bdevs": 2, 00:17:13.096 "num_base_bdevs_discovered": 2, 00:17:13.096 "num_base_bdevs_operational": 2, 00:17:13.096 "base_bdevs_list": [ 00:17:13.096 { 00:17:13.096 "name": "spare", 00:17:13.096 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:13.096 "is_configured": true, 00:17:13.096 "data_offset": 256, 00:17:13.096 "data_size": 7936 00:17:13.096 }, 00:17:13.096 { 00:17:13.096 "name": "BaseBdev2", 00:17:13.097 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:13.097 "is_configured": true, 00:17:13.097 "data_offset": 256, 00:17:13.097 "data_size": 7936 00:17:13.097 } 00:17:13.097 ] 00:17:13.097 }' 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.097 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.356 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.356 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.356 "name": "raid_bdev1", 00:17:13.356 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:13.356 "strip_size_kb": 0, 00:17:13.356 "state": "online", 00:17:13.356 "raid_level": "raid1", 00:17:13.356 "superblock": true, 00:17:13.356 "num_base_bdevs": 2, 00:17:13.356 "num_base_bdevs_discovered": 2, 00:17:13.356 "num_base_bdevs_operational": 2, 00:17:13.356 "base_bdevs_list": [ 00:17:13.356 { 
00:17:13.356 "name": "spare", 00:17:13.356 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:13.356 "is_configured": true, 00:17:13.356 "data_offset": 256, 00:17:13.356 "data_size": 7936 00:17:13.356 }, 00:17:13.356 { 00:17:13.356 "name": "BaseBdev2", 00:17:13.356 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:13.356 "is_configured": true, 00:17:13.356 "data_offset": 256, 00:17:13.356 "data_size": 7936 00:17:13.356 } 00:17:13.356 ] 00:17:13.356 }' 00:17:13.356 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.356 18:57:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.616 [2024-11-28 18:57:43.119609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.616 [2024-11-28 18:57:43.119701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.616 [2024-11-28 18:57:43.119829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.616 [2024-11-28 18:57:43.119940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.616 [2024-11-28 18:57:43.120011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.616 18:57:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.616 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.616 [2024-11-28 18:57:43.195603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.616 [2024-11-28 18:57:43.195655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.616 [2024-11-28 18:57:43.195675] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000009f80 00:17:13.616 [2024-11-28 18:57:43.195684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.617 [2024-11-28 18:57:43.197582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.617 [2024-11-28 18:57:43.197619] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.617 [2024-11-28 18:57:43.197671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:13.617 [2024-11-28 18:57:43.197707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.617 [2024-11-28 18:57:43.197801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.617 spare 00:17:13.617 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.617 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:13.617 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.617 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.876 [2024-11-28 18:57:43.297851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:13.876 [2024-11-28 18:57:43.297880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:13.876 [2024-11-28 18:57:43.297970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:17:13.876 [2024-11-28 18:57:43.298041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:13.876 [2024-11-28 18:57:43.298048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:13.876 [2024-11-28 18:57:43.298123] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.876 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.877 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.877 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.877 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.877 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.877 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.877 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.877 "name": "raid_bdev1", 00:17:13.877 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:13.877 "strip_size_kb": 0, 00:17:13.877 "state": "online", 00:17:13.877 "raid_level": "raid1", 00:17:13.877 "superblock": true, 00:17:13.877 "num_base_bdevs": 2, 00:17:13.877 "num_base_bdevs_discovered": 2, 00:17:13.877 "num_base_bdevs_operational": 2, 00:17:13.877 "base_bdevs_list": [ 00:17:13.877 { 00:17:13.877 "name": "spare", 00:17:13.877 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:13.877 "is_configured": true, 00:17:13.877 "data_offset": 256, 00:17:13.877 "data_size": 7936 00:17:13.877 }, 00:17:13.877 { 00:17:13.877 "name": "BaseBdev2", 00:17:13.877 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:13.877 "is_configured": true, 00:17:13.877 "data_offset": 256, 00:17:13.877 "data_size": 7936 00:17:13.877 } 00:17:13.877 ] 00:17:13.877 }' 00:17:13.877 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.877 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.136 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.136 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.136 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.136 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.136 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.396 "name": "raid_bdev1", 00:17:14.396 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:14.396 "strip_size_kb": 0, 00:17:14.396 "state": "online", 00:17:14.396 "raid_level": "raid1", 00:17:14.396 "superblock": true, 00:17:14.396 "num_base_bdevs": 2, 00:17:14.396 "num_base_bdevs_discovered": 2, 00:17:14.396 "num_base_bdevs_operational": 2, 00:17:14.396 "base_bdevs_list": [ 00:17:14.396 { 00:17:14.396 "name": "spare", 00:17:14.396 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:14.396 "is_configured": true, 00:17:14.396 "data_offset": 256, 00:17:14.396 "data_size": 7936 00:17:14.396 }, 00:17:14.396 { 00:17:14.396 "name": "BaseBdev2", 00:17:14.396 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:14.396 "is_configured": true, 00:17:14.396 "data_offset": 256, 00:17:14.396 "data_size": 7936 00:17:14.396 } 00:17:14.396 ] 00:17:14.396 }' 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.396 [2024-11-28 18:57:43.947879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.396 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.396 "name": "raid_bdev1", 00:17:14.396 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:14.396 "strip_size_kb": 0, 00:17:14.396 "state": "online", 00:17:14.396 "raid_level": "raid1", 00:17:14.397 "superblock": true, 00:17:14.397 "num_base_bdevs": 2, 00:17:14.397 "num_base_bdevs_discovered": 1, 00:17:14.397 "num_base_bdevs_operational": 1, 00:17:14.397 "base_bdevs_list": [ 00:17:14.397 { 00:17:14.397 "name": null, 00:17:14.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.397 "is_configured": false, 00:17:14.397 "data_offset": 0, 00:17:14.397 "data_size": 7936 00:17:14.397 }, 00:17:14.397 { 00:17:14.397 "name": 
"BaseBdev2", 00:17:14.397 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:14.397 "is_configured": true, 00:17:14.397 "data_offset": 256, 00:17:14.397 "data_size": 7936 00:17:14.397 } 00:17:14.397 ] 00:17:14.397 }' 00:17:14.397 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.397 18:57:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.966 18:57:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.966 18:57:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.966 18:57:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.966 [2024-11-28 18:57:44.412038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.966 [2024-11-28 18:57:44.412268] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.966 [2024-11-28 18:57:44.412331] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:14.966 [2024-11-28 18:57:44.412389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.966 [2024-11-28 18:57:44.416102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:17:14.966 18:57:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.966 18:57:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:14.966 [2024-11-28 18:57:44.417976] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:15.905 "name": "raid_bdev1", 00:17:15.905 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:15.905 "strip_size_kb": 0, 00:17:15.905 "state": "online", 00:17:15.905 "raid_level": "raid1", 00:17:15.905 "superblock": true, 00:17:15.905 "num_base_bdevs": 2, 00:17:15.905 "num_base_bdevs_discovered": 2, 00:17:15.905 "num_base_bdevs_operational": 2, 00:17:15.905 "process": { 00:17:15.905 "type": "rebuild", 00:17:15.905 "target": "spare", 00:17:15.905 "progress": { 00:17:15.905 "blocks": 2560, 00:17:15.905 "percent": 32 00:17:15.905 } 00:17:15.905 }, 00:17:15.905 "base_bdevs_list": [ 00:17:15.905 { 00:17:15.905 "name": "spare", 00:17:15.905 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:15.905 "is_configured": true, 00:17:15.905 "data_offset": 256, 00:17:15.905 "data_size": 7936 00:17:15.905 }, 00:17:15.905 { 00:17:15.905 "name": "BaseBdev2", 00:17:15.905 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:15.905 "is_configured": true, 00:17:15.905 "data_offset": 256, 00:17:15.905 "data_size": 7936 00:17:15.905 } 00:17:15.905 ] 00:17:15.905 }' 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.905 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.165 [2024-11-28 18:57:45.555357] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.165 [2024-11-28 18:57:45.624216] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:16.165 [2024-11-28 18:57:45.624271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.165 [2024-11-28 18:57:45.624285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.165 [2024-11-28 18:57:45.624294] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.165 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.166 18:57:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.166 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.166 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.166 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.166 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.166 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.166 "name": "raid_bdev1", 00:17:16.166 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:16.166 "strip_size_kb": 0, 00:17:16.166 "state": "online", 00:17:16.166 "raid_level": "raid1", 00:17:16.166 "superblock": true, 00:17:16.166 "num_base_bdevs": 2, 00:17:16.166 "num_base_bdevs_discovered": 1, 00:17:16.166 "num_base_bdevs_operational": 1, 00:17:16.166 "base_bdevs_list": [ 00:17:16.166 { 00:17:16.166 "name": null, 00:17:16.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.166 "is_configured": false, 00:17:16.166 "data_offset": 0, 00:17:16.166 "data_size": 7936 00:17:16.166 }, 00:17:16.166 { 00:17:16.166 "name": "BaseBdev2", 00:17:16.166 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:16.166 "is_configured": true, 00:17:16.166 "data_offset": 256, 00:17:16.166 "data_size": 7936 00:17:16.166 } 00:17:16.166 ] 00:17:16.166 }' 00:17:16.166 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.166 18:57:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.736 18:57:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.736 18:57:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.736 18:57:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.736 [2024-11-28 18:57:46.064253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.736 [2024-11-28 18:57:46.064375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.736 [2024-11-28 18:57:46.064423] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:16.736 [2024-11-28 18:57:46.064470] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.736 [2024-11-28 18:57:46.064659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.736 [2024-11-28 18:57:46.064709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.736 [2024-11-28 18:57:46.064787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:16.736 [2024-11-28 18:57:46.064826] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:16.736 [2024-11-28 18:57:46.064871] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:16.736 [2024-11-28 18:57:46.064950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.736 [2024-11-28 18:57:46.067965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:17:16.736 [2024-11-28 18:57:46.069886] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.736 spare 00:17:16.736 18:57:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.736 18:57:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:17.677 "name": "raid_bdev1", 00:17:17.677 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:17.677 "strip_size_kb": 0, 00:17:17.677 "state": "online", 00:17:17.677 "raid_level": "raid1", 00:17:17.677 "superblock": true, 00:17:17.677 "num_base_bdevs": 2, 00:17:17.677 "num_base_bdevs_discovered": 2, 00:17:17.677 "num_base_bdevs_operational": 2, 00:17:17.677 "process": { 00:17:17.677 "type": "rebuild", 00:17:17.677 "target": "spare", 00:17:17.677 "progress": { 00:17:17.677 "blocks": 2560, 00:17:17.677 "percent": 32 00:17:17.677 } 00:17:17.677 }, 00:17:17.677 "base_bdevs_list": [ 00:17:17.677 { 00:17:17.677 "name": "spare", 00:17:17.677 "uuid": "f199d7ba-6bc6-56cc-9ebf-a67c8998e9e9", 00:17:17.677 "is_configured": true, 00:17:17.677 "data_offset": 256, 00:17:17.677 "data_size": 7936 00:17:17.677 }, 00:17:17.677 { 00:17:17.677 "name": "BaseBdev2", 00:17:17.677 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:17.677 "is_configured": true, 00:17:17.677 "data_offset": 256, 00:17:17.677 "data_size": 7936 00:17:17.677 } 00:17:17.677 ] 00:17:17.677 }' 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.677 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.677 [2024-11-28 
18:57:47.234836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.677 [2024-11-28 18:57:47.276051] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.677 [2024-11-28 18:57:47.276161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.677 [2024-11-28 18:57:47.276201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.677 [2024-11-28 18:57:47.276222] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.937 18:57:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.937 "name": "raid_bdev1", 00:17:17.937 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:17.937 "strip_size_kb": 0, 00:17:17.937 "state": "online", 00:17:17.937 "raid_level": "raid1", 00:17:17.937 "superblock": true, 00:17:17.937 "num_base_bdevs": 2, 00:17:17.937 "num_base_bdevs_discovered": 1, 00:17:17.937 "num_base_bdevs_operational": 1, 00:17:17.937 "base_bdevs_list": [ 00:17:17.937 { 00:17:17.937 "name": null, 00:17:17.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.937 "is_configured": false, 00:17:17.937 "data_offset": 0, 00:17:17.937 "data_size": 7936 00:17:17.937 }, 00:17:17.937 { 00:17:17.937 "name": "BaseBdev2", 00:17:17.937 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:17.937 "is_configured": true, 00:17:17.937 "data_offset": 256, 00:17:17.937 "data_size": 7936 00:17:17.937 } 00:17:17.937 ] 00:17:17.937 }' 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.937 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.197 18:57:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.197 "name": "raid_bdev1", 00:17:18.197 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:18.197 "strip_size_kb": 0, 00:17:18.197 "state": "online", 00:17:18.197 "raid_level": "raid1", 00:17:18.197 "superblock": true, 00:17:18.197 "num_base_bdevs": 2, 00:17:18.197 "num_base_bdevs_discovered": 1, 00:17:18.197 "num_base_bdevs_operational": 1, 00:17:18.197 "base_bdevs_list": [ 00:17:18.197 { 00:17:18.197 "name": null, 00:17:18.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.197 "is_configured": false, 00:17:18.197 "data_offset": 0, 00:17:18.197 "data_size": 7936 00:17:18.197 }, 00:17:18.197 { 00:17:18.197 "name": "BaseBdev2", 00:17:18.197 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:18.197 "is_configured": true, 00:17:18.197 "data_offset": 256, 
00:17:18.197 "data_size": 7936 00:17:18.197 } 00:17:18.197 ] 00:17:18.197 }' 00:17:18.197 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:18.457 [2024-11-28 18:57:47.872308] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:18.457 [2024-11-28 18:57:47.872366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.457 [2024-11-28 18:57:47.872388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:18.457 [2024-11-28 18:57:47.872397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.457 [2024-11-28 18:57:47.872570] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.457 [2024-11-28 18:57:47.872586] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:18.457 [2024-11-28 18:57:47.872634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:18.457 [2024-11-28 18:57:47.872645] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:18.457 [2024-11-28 18:57:47.872659] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:18.457 [2024-11-28 18:57:47.872679] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:18.457 BaseBdev1 00:17:18.457 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.458 18:57:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:19.398 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.398 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.398 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.398 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.398 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.399 18:57:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.399 "name": "raid_bdev1", 00:17:19.399 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:19.399 "strip_size_kb": 0, 00:17:19.399 "state": "online", 00:17:19.399 "raid_level": "raid1", 00:17:19.399 "superblock": true, 00:17:19.399 "num_base_bdevs": 2, 00:17:19.399 "num_base_bdevs_discovered": 1, 00:17:19.399 "num_base_bdevs_operational": 1, 00:17:19.399 "base_bdevs_list": [ 00:17:19.399 { 00:17:19.399 "name": null, 00:17:19.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.399 "is_configured": false, 00:17:19.399 "data_offset": 0, 00:17:19.399 "data_size": 7936 00:17:19.399 }, 00:17:19.399 { 00:17:19.399 "name": "BaseBdev2", 00:17:19.399 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:19.399 "is_configured": true, 00:17:19.399 "data_offset": 256, 00:17:19.399 "data_size": 7936 00:17:19.399 } 00:17:19.399 ] 00:17:19.399 }' 00:17:19.399 18:57:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.399 18:57:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.969 "name": "raid_bdev1", 00:17:19.969 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:19.969 "strip_size_kb": 0, 00:17:19.969 "state": "online", 00:17:19.969 "raid_level": "raid1", 00:17:19.969 "superblock": true, 00:17:19.969 "num_base_bdevs": 2, 00:17:19.969 "num_base_bdevs_discovered": 1, 00:17:19.969 "num_base_bdevs_operational": 1, 00:17:19.969 "base_bdevs_list": [ 00:17:19.969 { 00:17:19.969 "name": 
null, 00:17:19.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.969 "is_configured": false, 00:17:19.969 "data_offset": 0, 00:17:19.969 "data_size": 7936 00:17:19.969 }, 00:17:19.969 { 00:17:19.969 "name": "BaseBdev2", 00:17:19.969 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:19.969 "is_configured": true, 00:17:19.969 "data_offset": 256, 00:17:19.969 "data_size": 7936 00:17:19.969 } 00:17:19.969 ] 00:17:19.969 }' 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:19.969 [2024-11-28 18:57:49.508767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.969 [2024-11-28 18:57:49.508929] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.969 [2024-11-28 18:57:49.508944] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:19.969 request: 00:17:19.969 { 00:17:19.969 "base_bdev": "BaseBdev1", 00:17:19.969 "raid_bdev": "raid_bdev1", 00:17:19.969 "method": "bdev_raid_add_base_bdev", 00:17:19.969 "req_id": 1 00:17:19.969 } 00:17:19.969 Got JSON-RPC error response 00:17:19.969 response: 00:17:19.969 { 00:17:19.969 "code": -22, 00:17:19.969 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:19.969 } 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.969 18:57:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.012 "name": "raid_bdev1", 00:17:21.012 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:21.012 "strip_size_kb": 0, 
00:17:21.012 "state": "online", 00:17:21.012 "raid_level": "raid1", 00:17:21.012 "superblock": true, 00:17:21.012 "num_base_bdevs": 2, 00:17:21.012 "num_base_bdevs_discovered": 1, 00:17:21.012 "num_base_bdevs_operational": 1, 00:17:21.012 "base_bdevs_list": [ 00:17:21.012 { 00:17:21.012 "name": null, 00:17:21.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.012 "is_configured": false, 00:17:21.012 "data_offset": 0, 00:17:21.012 "data_size": 7936 00:17:21.012 }, 00:17:21.012 { 00:17:21.012 "name": "BaseBdev2", 00:17:21.012 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:21.012 "is_configured": true, 00:17:21.012 "data_offset": 256, 00:17:21.012 "data_size": 7936 00:17:21.012 } 00:17:21.012 ] 00:17:21.012 }' 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.012 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:21.596 18:57:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.596 18:57:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.596 "name": "raid_bdev1", 00:17:21.596 "uuid": "b46b9024-196a-43f2-b807-5d038e52da9f", 00:17:21.596 "strip_size_kb": 0, 00:17:21.596 "state": "online", 00:17:21.596 "raid_level": "raid1", 00:17:21.596 "superblock": true, 00:17:21.596 "num_base_bdevs": 2, 00:17:21.596 "num_base_bdevs_discovered": 1, 00:17:21.596 "num_base_bdevs_operational": 1, 00:17:21.596 "base_bdevs_list": [ 00:17:21.596 { 00:17:21.596 "name": null, 00:17:21.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.596 "is_configured": false, 00:17:21.596 "data_offset": 0, 00:17:21.596 "data_size": 7936 00:17:21.596 }, 00:17:21.596 { 00:17:21.596 "name": "BaseBdev2", 00:17:21.596 "uuid": "b90bc70a-3c78-5167-8498-665aa731b3a0", 00:17:21.596 "is_configured": true, 00:17:21.596 "data_offset": 256, 00:17:21.596 "data_size": 7936 00:17:21.596 } 00:17:21.596 ] 00:17:21.596 }' 00:17:21.596 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.596 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.596 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.596 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 100824 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100824 ']' 00:17:21.597 18:57:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100824 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100824 00:17:21.597 killing process with pid 100824 00:17:21.597 Received shutdown signal, test time was about 60.000000 seconds 00:17:21.597 00:17:21.597 Latency(us) 00:17:21.597 [2024-11-28T18:57:51.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.597 [2024-11-28T18:57:51.203Z] =================================================================================================================== 00:17:21.597 [2024-11-28T18:57:51.203Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100824' 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 100824 00:17:21.597 [2024-11-28 18:57:51.137957] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.597 [2024-11-28 18:57:51.138078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.597 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 100824 00:17:21.597 [2024-11-28 18:57:51.138126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:17:21.597 [2024-11-28 18:57:51.138138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:21.597 [2024-11-28 18:57:51.171807] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.856 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:21.856 00:17:21.856 real 0m16.271s 00:17:21.856 user 0m21.814s 00:17:21.856 sys 0m1.717s 00:17:21.856 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.856 18:57:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:21.856 ************************************ 00:17:21.856 END TEST raid_rebuild_test_sb_md_interleaved 00:17:21.856 ************************************ 00:17:21.856 18:57:51 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:21.856 18:57:51 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:21.856 18:57:51 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 100824 ']' 00:17:21.856 18:57:51 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 100824 00:17:22.116 18:57:51 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:22.116 00:17:22.116 real 9m53.493s 00:17:22.116 user 14m2.667s 00:17:22.116 sys 1m48.401s 00:17:22.116 ************************************ 00:17:22.116 END TEST bdev_raid 00:17:22.116 ************************************ 00:17:22.116 18:57:51 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.116 18:57:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.116 18:57:51 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:22.116 18:57:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:22.116 18:57:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.116 18:57:51 -- common/autotest_common.sh@10 -- # set +x 00:17:22.116 
************************************ 00:17:22.116 START TEST spdkcli_raid 00:17:22.116 ************************************ 00:17:22.116 18:57:51 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:22.116 * Looking for test storage... 00:17:22.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:22.116 18:57:51 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:22.116 18:57:51 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:22.116 18:57:51 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.377 18:57:51 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.377 18:57:51 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:22.377 18:57:51 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.377 18:57:51 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.377 --rc genhtml_branch_coverage=1 00:17:22.377 --rc genhtml_function_coverage=1 00:17:22.377 --rc genhtml_legend=1 00:17:22.377 --rc geninfo_all_blocks=1 00:17:22.377 --rc geninfo_unexecuted_blocks=1 00:17:22.377 00:17:22.377 ' 00:17:22.377 18:57:51 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.377 --rc genhtml_branch_coverage=1 00:17:22.377 --rc genhtml_function_coverage=1 00:17:22.377 --rc genhtml_legend=1 00:17:22.377 --rc geninfo_all_blocks=1 00:17:22.377 --rc geninfo_unexecuted_blocks=1 00:17:22.377 00:17:22.377 ' 00:17:22.377 
18:57:51 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.377 --rc genhtml_branch_coverage=1 00:17:22.377 --rc genhtml_function_coverage=1 00:17:22.377 --rc genhtml_legend=1 00:17:22.377 --rc geninfo_all_blocks=1 00:17:22.377 --rc geninfo_unexecuted_blocks=1 00:17:22.377 00:17:22.377 ' 00:17:22.377 18:57:51 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.377 --rc genhtml_branch_coverage=1 00:17:22.377 --rc genhtml_function_coverage=1 00:17:22.377 --rc genhtml_legend=1 00:17:22.377 --rc geninfo_all_blocks=1 00:17:22.377 --rc geninfo_unexecuted_blocks=1 00:17:22.377 00:17:22.377 ' 00:17:22.377 18:57:51 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:22.377 18:57:51 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:22.377 18:57:51 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:22.377 18:57:51 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:22.377 18:57:51 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:22.377 18:57:51 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:22.377 18:57:51 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:22.377 18:57:51 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:22.377 18:57:51 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:22.378 18:57:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.378 18:57:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=101491 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:22.378 18:57:51 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 101491 00:17:22.378 18:57:51 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 101491 ']' 00:17:22.378 18:57:51 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.378 18:57:51 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.378 18:57:51 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.378 18:57:51 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.378 18:57:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.378 [2024-11-28 18:57:51.916101] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:17:22.378 [2024-11-28 18:57:51.916265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101491 ] 00:17:22.638 [2024-11-28 18:57:52.051821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:22.638 [2024-11-28 18:57:52.091755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:22.638 [2024-11-28 18:57:52.120449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.638 [2024-11-28 18:57:52.120552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.208 18:57:52 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.208 18:57:52 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:17:23.208 18:57:52 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:23.208 18:57:52 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:23.208 18:57:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.208 18:57:52 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:23.208 18:57:52 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.208 18:57:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.208 18:57:52 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:23.208 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:23.208 ' 00:17:25.118 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:25.118 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:25.118 18:57:54 spdkcli_raid -- 
spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:25.118 18:57:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.118 18:57:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.118 18:57:54 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:25.118 18:57:54 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.118 18:57:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.118 18:57:54 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:25.118 ' 00:17:26.054 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:26.054 18:57:55 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:26.054 18:57:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:26.054 18:57:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.312 18:57:55 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:26.312 18:57:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.312 18:57:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.312 18:57:55 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:26.312 18:57:55 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:26.880 18:57:56 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:26.880 18:57:56 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:26.880 18:57:56 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:26.880 18:57:56 
spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:26.880 18:57:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.880 18:57:56 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:26.880 18:57:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.880 18:57:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.880 18:57:56 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:26.880 ' 00:17:27.818 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:27.818 18:57:57 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:27.818 18:57:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:27.818 18:57:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.077 18:57:57 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:28.077 18:57:57 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.077 18:57:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.077 18:57:57 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:28.077 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:28.077 ' 00:17:29.457 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:29.457 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:29.457 18:57:58 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.457 18:57:58 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 101491 00:17:29.457 18:57:58 spdkcli_raid -- 
common/autotest_common.sh@954 -- # '[' -z 101491 ']' 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101491 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@959 -- # uname 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101491 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101491' 00:17:29.457 killing process with pid 101491 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 101491 00:17:29.457 18:57:58 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 101491 00:17:30.026 18:57:59 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:30.026 18:57:59 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 101491 ']' 00:17:30.026 18:57:59 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 101491 00:17:30.026 18:57:59 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 101491 ']' 00:17:30.026 18:57:59 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101491 00:17:30.026 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (101491) - No such process 00:17:30.026 18:57:59 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 101491 is not found' 00:17:30.026 Process with pid 101491 is not found 00:17:30.026 18:57:59 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:30.026 18:57:59 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:30.026 18:57:59 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:30.026 18:57:59 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:30.026 00:17:30.026 real 0m8.053s 00:17:30.026 user 0m16.924s 00:17:30.026 sys 0m1.151s 00:17:30.026 ************************************ 00:17:30.026 END TEST spdkcli_raid 00:17:30.026 ************************************ 00:17:30.026 18:57:59 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.026 18:57:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.285 18:57:59 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:30.285 18:57:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:30.285 18:57:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.285 18:57:59 -- common/autotest_common.sh@10 -- # set +x 00:17:30.285 ************************************ 00:17:30.285 START TEST blockdev_raid5f 00:17:30.285 ************************************ 00:17:30.285 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:30.285 * Looking for test storage... 
00:17:30.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:30.285 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:30.285 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:17:30.285 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:30.544 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:30.544 18:57:59 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.544 18:57:59 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.544 18:57:59 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.544 18:57:59 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.545 18:57:59 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.545 --rc genhtml_branch_coverage=1 00:17:30.545 --rc genhtml_function_coverage=1 00:17:30.545 --rc genhtml_legend=1 00:17:30.545 --rc geninfo_all_blocks=1 00:17:30.545 --rc geninfo_unexecuted_blocks=1 00:17:30.545 00:17:30.545 ' 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.545 --rc genhtml_branch_coverage=1 00:17:30.545 --rc genhtml_function_coverage=1 00:17:30.545 --rc genhtml_legend=1 00:17:30.545 --rc geninfo_all_blocks=1 00:17:30.545 --rc geninfo_unexecuted_blocks=1 
00:17:30.545 00:17:30.545 ' 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.545 --rc genhtml_branch_coverage=1 00:17:30.545 --rc genhtml_function_coverage=1 00:17:30.545 --rc genhtml_legend=1 00:17:30.545 --rc geninfo_all_blocks=1 00:17:30.545 --rc geninfo_unexecuted_blocks=1 00:17:30.545 00:17:30.545 ' 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.545 --rc genhtml_branch_coverage=1 00:17:30.545 --rc genhtml_function_coverage=1 00:17:30.545 --rc genhtml_legend=1 00:17:30.545 --rc geninfo_all_blocks=1 00:17:30.545 --rc geninfo_unexecuted_blocks=1 00:17:30.545 00:17:30.545 ' 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@709 -- 
# QOS_RUN_TIME=5 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=101756 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:30.545 18:57:59 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 101756 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 101756 ']' 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.545 18:57:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:30.545 [2024-11-28 18:58:00.042241] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:30.545 [2024-11-28 18:58:00.042450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101756 ] 00:17:30.806 [2024-11-28 18:58:00.182019] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:30.806 [2024-11-28 18:58:00.222461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.806 [2024-11-28 18:58:00.266786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:17:31.375 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:17:31.375 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:17:31.375 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:31.375 Malloc0 00:17:31.375 Malloc1 00:17:31.375 Malloc2 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.375 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.375 18:58:00 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.375 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:17:31.375 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.375 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.375 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.375 18:58:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:31.635 18:58:00 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.635 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:17:31.635 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:17:31.635 18:58:00 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:17:31.635 18:58:00 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.635 18:58:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.635 18:58:01 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:17:31.635 18:58:01 blockdev_raid5f -- bdev/blockdev.sh@786 
-- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "8d9235eb-5af1-4229-bcf7-16737396782f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8d9235eb-5af1-4229-bcf7-16737396782f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "8d9235eb-5af1-4229-bcf7-16737396782f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "679a8bcf-7865-4c9d-bebf-cde4009560c1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7f2a185f-b861-4c00-bde5-f0c44c3d5a42",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e5906c6a-8f49-4148-868b-1b9709a60844",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:31.635 18:58:01 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:17:31.635 18:58:01 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:17:31.635 18:58:01 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:17:31.635 18:58:01 blockdev_raid5f -- 
bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:17:31.635 18:58:01 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 101756 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 101756 ']' 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 101756 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101756 00:17:31.635 killing process with pid 101756 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101756' 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 101756 00:17:31.635 18:58:01 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 101756 00:17:32.575 18:58:01 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:32.575 18:58:01 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:32.575 18:58:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:32.575 18:58:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.575 18:58:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:32.575 ************************************ 00:17:32.575 START TEST bdev_hello_world 00:17:32.575 ************************************ 00:17:32.575 18:58:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:32.575 [2024-11-28 18:58:01.914151] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:32.575 [2024-11-28 18:58:01.914274] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101796 ] 00:17:32.575 [2024-11-28 18:58:02.053488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:32.575 [2024-11-28 18:58:02.091375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.575 [2024-11-28 18:58:02.138089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.834 [2024-11-28 18:58:02.386933] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:32.834 [2024-11-28 18:58:02.386993] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:32.834 [2024-11-28 18:58:02.387012] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:32.834 [2024-11-28 18:58:02.387361] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:32.834 [2024-11-28 18:58:02.387530] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:32.834 [2024-11-28 18:58:02.387553] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:32.834 [2024-11-28 18:58:02.387658] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:32.834 00:17:32.834 [2024-11-28 18:58:02.387677] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:33.403 ************************************ 00:17:33.403 END TEST bdev_hello_world 00:17:33.403 ************************************ 00:17:33.403 00:17:33.403 real 0m0.941s 00:17:33.403 user 0m0.519s 00:17:33.403 sys 0m0.314s 00:17:33.403 18:58:02 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.403 18:58:02 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:33.403 18:58:02 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:17:33.403 18:58:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:33.403 18:58:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.403 18:58:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:33.403 ************************************ 00:17:33.403 START TEST bdev_bounds 00:17:33.403 ************************************ 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=101827 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 101827' 00:17:33.403 Process bdevio pid: 101827 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 101827 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 101827 ']' 00:17:33.403 18:58:02 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.403 18:58:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:33.403 [2024-11-28 18:58:02.933694] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:33.403 [2024-11-28 18:58:02.933964] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101827 ] 00:17:33.662 [2024-11-28 18:58:03.075896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:33.662 [2024-11-28 18:58:03.107925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:33.662 [2024-11-28 18:58:03.151494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.662 [2024-11-28 18:58:03.151690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.662 [2024-11-28 18:58:03.151765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.229 18:58:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.229 18:58:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:34.229 18:58:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:34.488 I/O targets: 00:17:34.488 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:34.488 00:17:34.488 00:17:34.488 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.488 http://cunit.sourceforge.net/ 00:17:34.488 00:17:34.488 00:17:34.488 Suite: bdevio tests on: raid5f 00:17:34.488 Test: blockdev write read block ...passed 00:17:34.488 Test: blockdev write zeroes read block ...passed 00:17:34.488 Test: blockdev write zeroes read no split ...passed 00:17:34.488 Test: blockdev write zeroes read split ...passed 00:17:34.488 Test: blockdev write zeroes read split partial ...passed 00:17:34.488 Test: blockdev reset ...passed 00:17:34.488 Test: blockdev write read 8 blocks ...passed 00:17:34.488 Test: blockdev write read size > 128k ...passed 00:17:34.488 Test: blockdev write read invalid size ...passed 00:17:34.488 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:34.488 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:34.488 Test: blockdev write read max offset ...passed 00:17:34.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:34.488 Test: blockdev writev readv 8 blocks ...passed 00:17:34.488 Test: 
blockdev writev readv 30 x 1block ...passed 00:17:34.488 Test: blockdev writev readv block ...passed 00:17:34.488 Test: blockdev writev readv size > 128k ...passed 00:17:34.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:34.488 Test: blockdev comparev and writev ...passed 00:17:34.488 Test: blockdev nvme passthru rw ...passed 00:17:34.488 Test: blockdev nvme passthru vendor specific ...passed 00:17:34.488 Test: blockdev nvme admin passthru ...passed 00:17:34.488 Test: blockdev copy ...passed 00:17:34.488 00:17:34.488 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.488 suites 1 1 n/a 0 0 00:17:34.488 tests 23 23 23 0 0 00:17:34.488 asserts 130 130 130 0 n/a 00:17:34.488 00:17:34.488 Elapsed time = 0.312 seconds 00:17:34.488 0 00:17:34.488 18:58:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 101827 00:17:34.488 18:58:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 101827 ']' 00:17:34.488 18:58:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 101827 00:17:34.488 18:58:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:34.488 18:58:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.488 18:58:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101827 00:17:34.488 killing process with pid 101827 00:17:34.488 18:58:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.488 18:58:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.488 18:58:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101827' 00:17:34.488 18:58:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 101827 00:17:34.488 18:58:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 101827 
00:17:35.057 ************************************ 00:17:35.057 END TEST bdev_bounds 00:17:35.057 ************************************ 00:17:35.057 18:58:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:35.057 00:17:35.057 real 0m1.589s 00:17:35.057 user 0m3.694s 00:17:35.057 sys 0m0.458s 00:17:35.057 18:58:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.057 18:58:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:35.057 18:58:04 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:35.057 18:58:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:35.057 18:58:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.057 18:58:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:35.057 ************************************ 00:17:35.057 START TEST bdev_nbd 00:17:35.057 ************************************ 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:35.057 18:58:04 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:35.057 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=101875 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 101875 /var/tmp/spdk-nbd.sock 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 101875 ']' 00:17:35.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.058 18:58:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:35.058 [2024-11-28 18:58:04.603993] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:35.058 [2024-11-28 18:58:04.604239] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.318 [2024-11-28 18:58:04.746749] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:35.318 [2024-11-28 18:58:04.784872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.318 [2024-11-28 18:58:04.826068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.886 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.886 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:35.886 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:35.886 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:35.886 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:35.886 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:35.886 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:35.886 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:35.886 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:35.887 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:35.887 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:35.887 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:35.887 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:35.887 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:35.887 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:36.146 1+0 records in 00:17:36.146 1+0 records out 00:17:36.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362049 s, 11.3 MB/s 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:36.146 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:36.146 18:58:05 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:36.406 { 00:17:36.406 "nbd_device": "/dev/nbd0", 00:17:36.406 "bdev_name": "raid5f" 00:17:36.406 } 00:17:36.406 ]' 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:36.406 { 00:17:36.406 "nbd_device": "/dev/nbd0", 00:17:36.406 "bdev_name": "raid5f" 00:17:36.406 } 00:17:36.406 ]' 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:36.406 18:58:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:36.666 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:36.926 18:58:06 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:36.926 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:37.186 /dev/nbd0 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:37.186 18:58:06 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.186 1+0 records in 00:17:37.186 1+0 records out 00:17:37.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578243 s, 7.1 MB/s 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:37.186 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:37.446 { 00:17:37.446 "nbd_device": "/dev/nbd0", 00:17:37.446 "bdev_name": "raid5f" 00:17:37.446 } 00:17:37.446 ]' 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:37.446 { 00:17:37.446 "nbd_device": "/dev/nbd0", 00:17:37.446 "bdev_name": "raid5f" 00:17:37.446 } 00:17:37.446 ]' 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
bs=4096 count=256 00:17:37.446 256+0 records in 00:17:37.446 256+0 records out 00:17:37.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138189 s, 75.9 MB/s 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:37.446 256+0 records in 00:17:37.446 256+0 records out 00:17:37.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329318 s, 31.8 MB/s 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:37.446 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:37.447 18:58:06 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:37.447 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:37.447 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:37.447 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.447 18:58:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:37.707 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:37.967 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:38.227 malloc_lvol_verify 00:17:38.227 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:38.487 0886b0ed-3079-436b-9733-fa960e8a2d4f 00:17:38.487 18:58:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:38.487 29270b1f-0fbd-4967-a3b6-4f6d00f54170 00:17:38.487 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:38.747 /dev/nbd0 
00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:38.747 mke2fs 1.47.0 (5-Feb-2023) 00:17:38.747 Discarding device blocks: 0/4096 done 00:17:38.747 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:38.747 00:17:38.747 Allocating group tables: 0/1 done 00:17:38.747 Writing inode tables: 0/1 done 00:17:38.747 Creating journal (1024 blocks): done 00:17:38.747 Writing superblocks and filesystem accounting information: 0/1 done 00:17:38.747 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.747 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 101875 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 101875 ']' 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 101875 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101875 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.007 killing process with pid 101875 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101875' 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 101875 00:17:39.007 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 101875 00:17:39.577 ************************************ 00:17:39.577 END TEST bdev_nbd 00:17:39.577 ************************************ 00:17:39.577 18:58:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:39.577 00:17:39.577 real 0m4.450s 00:17:39.577 user 0m6.249s 00:17:39.577 sys 0m1.384s 00:17:39.577 
18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.577 18:58:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:39.577 18:58:09 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:39.577 18:58:09 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:17:39.577 18:58:09 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:17:39.577 18:58:09 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:17:39.577 18:58:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.577 18:58:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.577 18:58:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:39.577 ************************************ 00:17:39.577 START TEST bdev_fio 00:17:39.577 ************************************ 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:39.577 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:39.577 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=raid5f 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:39.838 ************************************ 00:17:39.838 START TEST bdev_fio_rw_verify 00:17:39.838 ************************************ 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # 
local fio_dir=/usr/src/fio 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:39.838 18:58:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:40.098 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:40.098 fio-3.35 00:17:40.098 Starting 1 thread 00:17:52.318 00:17:52.318 job_raid5f: (groupid=0, jobs=1): err= 0: pid=102064: Thu Nov 28 18:58:20 2024 00:17:52.318 read: IOPS=12.5k, BW=48.9MiB/s (51.3MB/s)(489MiB/10001msec) 00:17:52.318 slat (nsec): min=17702, max=62833, avg=19080.51, stdev=1691.57 00:17:52.318 clat (usec): min=11, max=310, avg=129.44, stdev=45.04 00:17:52.318 lat (usec): min=30, max=339, avg=148.52, stdev=45.24 00:17:52.318 clat percentiles (usec): 00:17:52.318 | 50.000th=[ 135], 99.000th=[ 210], 99.900th=[ 233], 99.990th=[ 273], 00:17:52.318 | 99.999th=[ 302] 00:17:52.318 write: IOPS=13.1k, BW=51.2MiB/s (53.6MB/s)(505MiB/9876msec); 0 zone resets 00:17:52.318 slat (usec): min=7, max=268, avg=15.95, stdev= 3.61 00:17:52.318 clat (usec): min=57, max=1670, avg=293.83, stdev=42.19 00:17:52.318 lat (usec): min=72, max=1938, avg=309.78, stdev=43.41 00:17:52.318 clat percentiles (usec): 00:17:52.318 | 50.000th=[ 297], 99.000th=[ 371], 99.900th=[ 611], 99.990th=[ 1385], 00:17:52.318 | 99.999th=[ 1598] 00:17:52.318 bw ( KiB/s): min=48880, max=54792, per=98.86%, avg=51794.11, stdev=1537.86, samples=19 00:17:52.318 iops : min=12220, max=13698, avg=12948.53, stdev=384.47, samples=19 00:17:52.318 lat (usec) : 20=0.01%, 50=0.01%, 100=16.66%, 250=39.30%, 500=43.96% 00:17:52.318 lat (usec) : 750=0.04%, 1000=0.02% 00:17:52.318 lat (msec) : 2=0.01% 00:17:52.318 cpu : usr=98.90%, sys=0.40%, ctx=30, majf=0, minf=13307 00:17:52.318 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.318 complete : 0=0.0%, 4=90.0%, 8=10.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.318 issued rwts: total=125234,129354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.318 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:52.318 00:17:52.318 Run status group 0 (all jobs): 00:17:52.318 READ: bw=48.9MiB/s (51.3MB/s), 48.9MiB/s-48.9MiB/s (51.3MB/s-51.3MB/s), io=489MiB (513MB), run=10001-10001msec 00:17:52.318 WRITE: bw=51.2MiB/s (53.6MB/s), 51.2MiB/s-51.2MiB/s (53.6MB/s-53.6MB/s), io=505MiB (530MB), run=9876-9876msec 00:17:52.318 ----------------------------------------------------- 00:17:52.318 Suppressions used: 00:17:52.318 count bytes template 00:17:52.318 1 7 /usr/src/fio/parse.c 00:17:52.318 235 22560 /usr/src/fio/iolog.c 00:17:52.318 1 8 libtcmalloc_minimal.so 00:17:52.318 1 904 libcrypto.so 00:17:52.318 ----------------------------------------------------- 00:17:52.318 00:17:52.318 ************************************ 00:17:52.318 END TEST bdev_fio_rw_verify 00:17:52.318 ************************************ 00:17:52.318 00:17:52.318 real 0m11.480s 00:17:52.318 user 0m11.662s 00:17:52.318 sys 0m0.699s 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1286 -- # local bdev_type= 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "8d9235eb-5af1-4229-bcf7-16737396782f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8d9235eb-5af1-4229-bcf7-16737396782f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "8d9235eb-5af1-4229-bcf7-16737396782f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "679a8bcf-7865-4c9d-bebf-cde4009560c1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7f2a185f-b861-4c00-bde5-f0c44c3d5a42",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e5906c6a-8f49-4148-868b-1b9709a60844",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:52.318 /home/vagrant/spdk_repo/spdk 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:52.318 00:17:52.318 real 0m11.793s 00:17:52.318 user 0m11.785s 00:17:52.318 sys 0m0.855s 00:17:52.318 ************************************ 00:17:52.318 END TEST bdev_fio 00:17:52.318 ************************************ 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.318 18:58:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set 
+x 00:17:52.318 18:58:20 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:52.318 18:58:20 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:52.318 18:58:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:52.318 18:58:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.318 18:58:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:52.318 ************************************ 00:17:52.318 START TEST bdev_verify 00:17:52.318 ************************************ 00:17:52.319 18:58:20 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:52.319 [2024-11-28 18:58:20.985040] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:52.319 [2024-11-28 18:58:20.985151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102216 ] 00:17:52.319 [2024-11-28 18:58:21.122157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:52.319 [2024-11-28 18:58:21.160122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:52.319 [2024-11-28 18:58:21.213101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.319 [2024-11-28 18:58:21.213189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.319 Running I/O for 5 seconds... 
00:17:54.196 10828.00 IOPS, 42.30 MiB/s [2024-11-28T18:58:24.740Z] 10842.00 IOPS, 42.35 MiB/s [2024-11-28T18:58:25.677Z] 10881.67 IOPS, 42.51 MiB/s [2024-11-28T18:58:26.616Z] 10899.50 IOPS, 42.58 MiB/s [2024-11-28T18:58:26.616Z] 10890.80 IOPS, 42.54 MiB/s 00:17:57.010 Latency(us) 00:17:57.010 [2024-11-28T18:58:26.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.010 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:57.010 Verification LBA range: start 0x0 length 0x2000 00:17:57.010 raid5f : 5.02 6594.05 25.76 0.00 0.00 29211.03 103.53 21020.87 00:17:57.010 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:57.010 Verification LBA range: start 0x2000 length 0x2000 00:17:57.010 raid5f : 5.02 4279.78 16.72 0.00 0.00 44889.37 260.62 31759.80 00:17:57.010 [2024-11-28T18:58:26.616Z] =================================================================================================================== 00:17:57.010 [2024-11-28T18:58:26.616Z] Total : 10873.84 42.48 0.00 0.00 35380.19 103.53 31759.80 00:17:57.269 00:17:57.270 real 0m5.978s 00:17:57.270 user 0m11.018s 00:17:57.270 sys 0m0.335s 00:17:57.270 18:58:26 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.270 18:58:26 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:57.270 ************************************ 00:17:57.270 END TEST bdev_verify 00:17:57.530 ************************************ 00:17:57.530 18:58:26 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:57.530 18:58:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:57.530 18:58:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.530 18:58:26 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:17:57.530 ************************************ 00:17:57.530 START TEST bdev_verify_big_io 00:17:57.530 ************************************ 00:17:57.530 18:58:26 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:57.530 [2024-11-28 18:58:27.036558] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:57.530 [2024-11-28 18:58:27.036747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102302 ] 00:17:57.789 [2024-11-28 18:58:27.176000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:57.789 [2024-11-28 18:58:27.213536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:57.789 [2024-11-28 18:58:27.257406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.789 [2024-11-28 18:58:27.257468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.048 Running I/O for 5 seconds... 
00:17:59.998 633.00 IOPS, 39.56 MiB/s [2024-11-28T18:58:30.985Z] 761.00 IOPS, 47.56 MiB/s [2024-11-28T18:58:31.925Z] 782.00 IOPS, 48.88 MiB/s [2024-11-28T18:58:32.865Z] 792.75 IOPS, 49.55 MiB/s [2024-11-28T18:58:32.865Z] 799.00 IOPS, 49.94 MiB/s 00:18:03.259 Latency(us) 00:18:03.259 [2024-11-28T18:58:32.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.259 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:03.259 Verification LBA range: start 0x0 length 0x200 00:18:03.259 raid5f : 5.28 456.77 28.55 0.00 0.00 7028144.18 195.46 303431.74 00:18:03.259 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:03.259 Verification LBA range: start 0x200 length 0x200 00:18:03.259 raid5f : 5.29 347.75 21.73 0.00 0.00 8998794.25 182.97 383859.43 00:18:03.259 [2024-11-28T18:58:32.865Z] =================================================================================================================== 00:18:03.259 [2024-11-28T18:58:32.865Z] Total : 804.51 50.28 0.00 0.00 7881119.04 182.97 383859.43 00:18:03.829 00:18:03.829 real 0m6.241s 00:18:03.829 user 0m11.558s 00:18:03.829 sys 0m0.322s 00:18:03.829 ************************************ 00:18:03.829 END TEST bdev_verify_big_io 00:18:03.829 ************************************ 00:18:03.829 18:58:33 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.829 18:58:33 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.829 18:58:33 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:03.829 18:58:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:03.829 18:58:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.829 18:58:33 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:03.829 ************************************ 00:18:03.829 START TEST bdev_write_zeroes 00:18:03.829 ************************************ 00:18:03.829 18:58:33 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:03.829 [2024-11-28 18:58:33.359239] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:03.829 [2024-11-28 18:58:33.359370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102385 ] 00:18:04.088 [2024-11-28 18:58:33.498424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:04.088 [2024-11-28 18:58:33.537850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.088 [2024-11-28 18:58:33.584146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.348 Running I/O for 1 seconds... 
00:18:05.296 29655.00 IOPS, 115.84 MiB/s 00:18:05.296 Latency(us) 00:18:05.296 [2024-11-28T18:58:34.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.296 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:05.296 raid5f : 1.01 29620.44 115.70 0.00 0.00 4309.49 1485.17 5940.68 00:18:05.296 [2024-11-28T18:58:34.902Z] =================================================================================================================== 00:18:05.296 [2024-11-28T18:58:34.902Z] Total : 29620.44 115.70 0.00 0.00 4309.49 1485.17 5940.68 00:18:05.891 00:18:05.892 real 0m1.945s 00:18:05.892 user 0m1.508s 00:18:05.892 sys 0m0.323s 00:18:05.892 ************************************ 00:18:05.892 END TEST bdev_write_zeroes 00:18:05.892 ************************************ 00:18:05.892 18:58:35 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.892 18:58:35 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:05.892 18:58:35 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:05.892 18:58:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:05.892 18:58:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.892 18:58:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:05.892 ************************************ 00:18:05.892 START TEST bdev_json_nonenclosed 00:18:05.892 ************************************ 00:18:05.892 18:58:35 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:05.892 [2024-11-28 
18:58:35.382817] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:05.892 [2024-11-28 18:58:35.382933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102427 ] 00:18:06.177 [2024-11-28 18:58:35.523414] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:06.177 [2024-11-28 18:58:35.561758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.177 [2024-11-28 18:58:35.606773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.177 [2024-11-28 18:58:35.606970] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:06.177 [2024-11-28 18:58:35.606995] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:06.177 [2024-11-28 18:58:35.607007] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:06.177 00:18:06.177 real 0m0.432s 00:18:06.177 user 0m0.172s 00:18:06.177 sys 0m0.156s 00:18:06.177 ************************************ 00:18:06.177 END TEST bdev_json_nonenclosed 00:18:06.177 ************************************ 00:18:06.177 18:58:35 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.177 18:58:35 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:06.453 18:58:35 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:06.453 18:58:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:06.453 
18:58:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.453 18:58:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:06.453 ************************************ 00:18:06.453 START TEST bdev_json_nonarray 00:18:06.453 ************************************ 00:18:06.453 18:58:35 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:06.453 [2024-11-28 18:58:35.886520] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:06.453 [2024-11-28 18:58:35.886643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102453 ] 00:18:06.453 [2024-11-28 18:58:36.028015] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:06.713 [2024-11-28 18:58:36.065095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.713 [2024-11-28 18:58:36.104896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.713 [2024-11-28 18:58:36.105085] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:06.713 [2024-11-28 18:58:36.105173] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:18:06.713 [2024-11-28 18:58:36.105197] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:06.713
00:18:06.713 real	0m0.421s
00:18:06.713 user	0m0.171s
00:18:06.713 sys	0m0.145s
00:18:06.714 18:58:36 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:06.714 ************************************
00:18:06.714 END TEST bdev_json_nonarray
00:18:06.714 ************************************
00:18:06.714 18:58:36 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]]
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]]
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]]
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:18:06.714 18:58:36 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:18:06.714
00:18:06.714 real	0m36.609s
00:18:06.714 user	0m48.684s
00:18:06.714 sys	0m5.574s
00:18:06.714 18:58:36 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:06.714 18:58:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:06.714 ************************************
00:18:06.714 END TEST blockdev_raid5f
00:18:06.714 ************************************
00:18:06.974 18:58:36 -- spdk/autotest.sh@194 -- # uname -s
00:18:06.974 18:58:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:18:06.974 18:58:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:18:06.974 18:58:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:18:06.974 18:58:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@260 -- # timing_exit lib
00:18:06.974 18:58:36 -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:06.974 18:58:36 -- common/autotest_common.sh@10 -- # set +x
00:18:06.974 18:58:36 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:18:06.974 18:58:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:18:06.974 18:58:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:18:06.974 18:58:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:18:06.974 18:58:36 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:18:06.974 18:58:36 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:18:06.974 18:58:36 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:18:06.974 18:58:36 -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:06.974 18:58:36 -- common/autotest_common.sh@10 -- # set +x
00:18:06.974 18:58:36 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:18:06.974 18:58:36 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:18:06.974 18:58:36 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:18:06.974 18:58:36 -- common/autotest_common.sh@10 -- # set +x
00:18:09.518 INFO: APP EXITING
00:18:09.518 INFO: killing all VMs
00:18:09.518 INFO: killing vhost app
00:18:09.518 INFO: EXIT DONE
00:18:09.778 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:18:09.778 Waiting for block devices as requested
00:18:10.038 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:18:10.038 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:18:10.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:18:10.980 Cleaning
00:18:10.980 Removing: /var/run/dpdk/spdk0/config
00:18:10.980 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:18:10.980 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:18:10.980 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:18:10.980 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:18:10.980 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:18:10.980 Removing: /var/run/dpdk/spdk0/hugepage_info
00:18:10.980 Removing: /dev/shm/spdk_tgt_trace.pid70681
00:18:10.980 Removing: /var/run/dpdk/spdk0
00:18:10.980 Removing: /var/run/dpdk/spdk_pid100508
00:18:10.980 Removing: /var/run/dpdk/spdk_pid100824
00:18:10.980 Removing: /var/run/dpdk/spdk_pid101491
00:18:10.980 Removing: /var/run/dpdk/spdk_pid101756
00:18:10.980 Removing: /var/run/dpdk/spdk_pid101796
00:18:10.980 Removing: /var/run/dpdk/spdk_pid101827
00:18:10.980 Removing: /var/run/dpdk/spdk_pid102054
00:18:10.980 Removing: /var/run/dpdk/spdk_pid102216
00:18:10.980 Removing: /var/run/dpdk/spdk_pid102302
00:18:10.980 Removing: /var/run/dpdk/spdk_pid102385
00:18:10.980 Removing: /var/run/dpdk/spdk_pid102427
00:18:10.980 Removing: /var/run/dpdk/spdk_pid102453
00:18:10.980 Removing: /var/run/dpdk/spdk_pid70518
00:18:10.980 Removing: /var/run/dpdk/spdk_pid70681
00:18:10.980 Removing: /var/run/dpdk/spdk_pid70883
00:18:10.980 Removing: /var/run/dpdk/spdk_pid70970
00:18:10.980 Removing: /var/run/dpdk/spdk_pid70999
00:18:10.980 Removing: /var/run/dpdk/spdk_pid71105
00:18:10.980 Removing: /var/run/dpdk/spdk_pid71123
00:18:10.980 Removing: /var/run/dpdk/spdk_pid71311
00:18:10.980 Removing: /var/run/dpdk/spdk_pid71390
00:18:10.980 Removing: /var/run/dpdk/spdk_pid71464
00:18:11.241 Removing: /var/run/dpdk/spdk_pid71564
00:18:11.241 Removing: /var/run/dpdk/spdk_pid71650
00:18:11.241 Removing: /var/run/dpdk/spdk_pid71684
00:18:11.241 Removing: /var/run/dpdk/spdk_pid71715
00:18:11.241 Removing: /var/run/dpdk/spdk_pid71791
00:18:11.241 Removing: /var/run/dpdk/spdk_pid71903
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72328
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72370
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72424
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72441
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72503
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72515
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72584
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72600
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72642
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72660
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72702
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72720
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72860
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72891
00:18:11.241 Removing: /var/run/dpdk/spdk_pid72980
00:18:11.241 Removing: /var/run/dpdk/spdk_pid74137
00:18:11.241 Removing: /var/run/dpdk/spdk_pid74343
00:18:11.241 Removing: /var/run/dpdk/spdk_pid74472
00:18:11.241 Removing: /var/run/dpdk/spdk_pid75071
00:18:11.241 Removing: /var/run/dpdk/spdk_pid75272
00:18:11.241 Removing: /var/run/dpdk/spdk_pid75401
00:18:11.241 Removing: /var/run/dpdk/spdk_pid76005
00:18:11.241 Removing: /var/run/dpdk/spdk_pid76319
00:18:11.241 Removing: /var/run/dpdk/spdk_pid76448
00:18:11.241 Removing: /var/run/dpdk/spdk_pid77782
00:18:11.241 Removing: /var/run/dpdk/spdk_pid78020
00:18:11.241 Removing: /var/run/dpdk/spdk_pid78149
00:18:11.241 Removing: /var/run/dpdk/spdk_pid79479
00:18:11.241 Removing: /var/run/dpdk/spdk_pid79721
00:18:11.241 Removing: /var/run/dpdk/spdk_pid79850
00:18:11.241 Removing: /var/run/dpdk/spdk_pid81180
00:18:11.241 Removing: /var/run/dpdk/spdk_pid81619
00:18:11.241 Removing: /var/run/dpdk/spdk_pid81749
00:18:11.241 Removing: /var/run/dpdk/spdk_pid83169
00:18:11.241 Removing: /var/run/dpdk/spdk_pid83417
00:18:11.241 Removing: /var/run/dpdk/spdk_pid83552
00:18:11.241 Removing: /var/run/dpdk/spdk_pid84965
00:18:11.241 Removing: /var/run/dpdk/spdk_pid85218
00:18:11.241 Removing: /var/run/dpdk/spdk_pid85348
00:18:11.241 Removing: /var/run/dpdk/spdk_pid86769
00:18:11.241 Removing: /var/run/dpdk/spdk_pid87240
00:18:11.241 Removing: /var/run/dpdk/spdk_pid87369
00:18:11.241 Removing: /var/run/dpdk/spdk_pid87500
00:18:11.241 Removing: /var/run/dpdk/spdk_pid87907
00:18:11.241 Removing: /var/run/dpdk/spdk_pid88623
00:18:11.241 Removing: /var/run/dpdk/spdk_pid88989
00:18:11.241 Removing: /var/run/dpdk/spdk_pid89664
00:18:11.241 Removing: /var/run/dpdk/spdk_pid90089
00:18:11.241 Removing: /var/run/dpdk/spdk_pid90836
00:18:11.502 Removing: /var/run/dpdk/spdk_pid91228
00:18:11.502 Removing: /var/run/dpdk/spdk_pid93148
00:18:11.502 Removing: /var/run/dpdk/spdk_pid93582
00:18:11.502 Removing: /var/run/dpdk/spdk_pid94000
00:18:11.502 Removing: /var/run/dpdk/spdk_pid96036
00:18:11.502 Removing: /var/run/dpdk/spdk_pid96510
00:18:11.502 Removing: /var/run/dpdk/spdk_pid97010
00:18:11.502 Removing: /var/run/dpdk/spdk_pid98048
00:18:11.502 Removing: /var/run/dpdk/spdk_pid98361
00:18:11.502 Removing: /var/run/dpdk/spdk_pid99276
00:18:11.502 Removing: /var/run/dpdk/spdk_pid99593
00:18:11.502 Clean
00:18:11.502 18:58:40 -- common/autotest_common.sh@1453 -- # return 0
00:18:11.502 18:58:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:18:11.502 18:58:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:11.502 18:58:40 -- common/autotest_common.sh@10 -- # set +x
00:18:11.502 18:58:41 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:18:11.502 18:58:41 -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:11.502 18:58:41 -- common/autotest_common.sh@10 -- # set +x
00:18:11.502 18:58:41 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:11.502 18:58:41 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:18:11.762 18:58:41 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:18:11.762 18:58:41 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:18:11.762 18:58:41 -- spdk/autotest.sh@398 -- # hostname
00:18:11.762 18:58:41 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:18:11.762 geninfo: WARNING: invalid characters removed from testname!
00:18:38.336 18:59:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:39.717 18:59:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:42.256 18:59:11 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:44.163 18:59:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:46.072 18:59:15 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:47.984 18:59:17 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:50.527 18:59:19 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:50.527 18:59:19 -- spdk/autorun.sh@1 -- $ timing_finish
00:18:50.527 18:59:19 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:18:50.527 18:59:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:50.527 18:59:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:50.527 18:59:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:50.527 + [[ -n 6161 ]]
00:18:50.527 + sudo kill 6161
00:18:50.537 [Pipeline] }
00:18:50.553 [Pipeline] // timeout
00:18:50.558 [Pipeline] }
00:18:50.573 [Pipeline] // stage
00:18:50.578 [Pipeline] }
00:18:50.592 [Pipeline] // catchError
00:18:50.601 [Pipeline] stage
00:18:50.603 [Pipeline] { (Stop VM)
00:18:50.616 [Pipeline] sh
00:18:50.899 + vagrant halt
00:18:53.473 ==> default: Halting domain...
00:19:01.621 [Pipeline] sh
00:19:01.905 + vagrant destroy -f
00:19:04.447 ==> default: Removing domain...
00:19:04.461 [Pipeline] sh
00:19:04.747 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:19:04.757 [Pipeline] }
00:19:04.773 [Pipeline] // stage
00:19:04.777 [Pipeline] }
00:19:04.791 [Pipeline] // dir
00:19:04.796 [Pipeline] }
00:19:04.810 [Pipeline] // wrap
00:19:04.816 [Pipeline] }
00:19:04.829 [Pipeline] // catchError
00:19:04.843 [Pipeline] stage
00:19:04.848 [Pipeline] { (Epilogue)
00:19:04.880 [Pipeline] sh
00:19:05.169 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:19:09.383 [Pipeline] catchError
00:19:09.385 [Pipeline] {
00:19:09.402 [Pipeline] sh
00:19:09.694 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:19:09.694 Artifacts sizes are good
00:19:09.704 [Pipeline] }
00:19:09.718 [Pipeline] // catchError
00:19:09.732 [Pipeline] archiveArtifacts
00:19:09.739 Archiving artifacts
00:19:09.846 [Pipeline] cleanWs
00:19:09.859 [WS-CLEANUP] Deleting project workspace...
00:19:09.859 [WS-CLEANUP] Deferred wipeout is used...
00:19:09.866 [WS-CLEANUP] done
00:19:09.868 [Pipeline] }
00:19:09.883 [Pipeline] // stage
00:19:09.889 [Pipeline] }
00:19:09.902 [Pipeline] // node
00:19:09.907 [Pipeline] End of Pipeline
00:19:09.945 Finished: SUCCESS